Missouri v. Biden

United States Court of Appeals, Fifth Circuit
Oct 3, 2023
83 F.4th 350 (5th Cir. 2023)

Opinion

No. 23-30445

10-03-2023

State of MISSOURI; State of Louisiana; Aaron Kheriaty; Martin Kulldorff; Jim Hoft; Jayanta Bhattacharya; Jill Hines, Plaintiffs—Appellees, v. Joseph R. BIDEN, Jr.; Vivek H. Murthy; Xavier Becerra; Department of Health & Human Services; Anthony Fauci; Et al., Defendants—Appellants.

Appeal from the United States District Court for the Western District of Louisiana, USDC NO. 3:22-CV-1213, Terry A. Doughty, Chief Judge

Joshua M. Divine, Solicitor, Todd Scott, Office of The Missouri Attorney General, Jefferson City, MO, Elizabeth Baker Murrill, Esq., Assistant Attorney General, Office of the Attorney General for the State of Louisiana, Baton Rouge, LA, for Plaintiff—Appellee State of Missouri.

Elizabeth Baker Murrill, Esq., Assistant Attorney General, Office of the Attorney General for the State of Louisiana, Baton Rouge, LA, Dean John Sauer, James Otis Law Group, L.L.C., Saint Louis, MO, Tracy Short, Office of the Attorney General for the State of Louisiana, Baton Rouge, LA, for Plaintiff—Appellee State of Louisiana.

John Julian Vecchione, Jenin Younes, New Civil Liberties Alliance, Washington, DC, Elizabeth Baker Murrill, Esq., Assistant Attorney General, Office of the Attorney General for the State of Louisiana, Baton Rouge, LA, for Plaintiffs—Appellees Aaron Kheriaty, Martin Kulldorff, Jayanta Bhattacharya, and Jill Hines.

Jonathon Christian Burns, St. Louis, MO, Elizabeth Baker Murrill, Esq., Assistant Attorney General, Office of the Attorney General for the State of Louisiana, Baton Rouge, LA, for Plaintiff—Appellee Jim Hoft.

Daniel Winik, U.S. Department of Justice, Civil Division, Appellate Section, Washington, DC, Simon Christopher Brewer, Attorney, Daniel Bentele Hahs Tenny, Esq., U.S. Department of Justice, Civil Division, Washington, DC, for Defendants—Appellants.

Glynn Shelly Maturin, II, Welborn & Hargett, L.L.C., Lafayette, LA, for Amicus Curiae Children's Health Defense.

John B. Bellinger, III, R. Stanton Jones, Elisabeth S. Theodore, Stephen K. Wirth, Arnold & Porter Kaye Scholer, L.L.P., Washington, DC, for Amici Curiae Leland Stanford Junior University, Alex Stamos, and Renee DiResta.

David A. Greene, Electronic Frontier Foundation, San Francisco, CA, for Amicus Curiae Electronic Frontier Foundation.

Jay A. Sekulow, Esq., American Center for Law & Justice, Washington, DC, for Amicus Curiae American Center for Law and Justice.

Leonid Goldstein, Tyler, TX, Pro Se.

Scott L. Winkelman, Crowell & Moring, L.L.P., Washington, DC, Jon Marshall Greenbaum, Esq., Director, Lawyers' Committee for Civil Rights Under Law, Washington, DC, for Amicus Curiae Lawyers' Committee for Civil Rights Under Law.

Scott L. Winkelman, Crowell & Moring, L.L.P., Washington, DC, for Amici Curiae Brennan Center for Justice at New York University School of Law, and Common Cause.

Grace Zhou, Office of the Attorney General for the State of New York, New York, NY, for Amici Curiae State of New York, State of Arizona, State of California, State of Colorado, State of Connecticut, State of Delaware, State of Hawaii, State of Illinois, State of Maine, Commonwealth of Massachusetts, State of Michigan, State of Minnesota, State of Nevada, State of New Jersey, State of New Mexico, State of Oregon, Commonwealth of Pennsylvania, State of Rhode Island, State of Vermont, State of Wisconsin, and District of Columbia.

David Anthony Dalia, New Orleans, LA, for Amici Curiae America's Frontline Doctors and Dr. Simone Gold, M.D., J.D.

Jay R. Carson, Wegman Hessler Valore, Cleveland, OH, for Amicus Curiae The Buckeye Institute.

Peter Martin Torstensen, Jr., Montana Attorney General's Office, Solicitor General's Office, Helena, MT, for Amici Curiae State of Montana, State of Idaho, State of Iowa, State of Kansas, State of Nebraska, State of South Carolina, and State of Utah.

Travis Christopher Barham, Alliance Defending Freedom, Lawrenceville, GA, for Amicus Curiae Alliance Defending Freedom.

Henry Charles Whitaker, Office of the Attorney General for the State of Florida, Tallahassee, FL, for Amicus Curiae State of Florida.

Eric Arthur Sell, Mount Airy, MD, for Amicus Curiae Center for American Liberty.

William Jeffrey Olson, Esq., William J. Olson, P.C., Vienna, VA, for Amici Curiae Free Speech Coalition, Free Speech Defense and Education Fund, Gun Owners of America, Incorporated, Gun Owners Foundation, Gun Owners of California, Tennessee Firearms Association, Public Advocate of the United States, U.S. Constitutional Rights Legal Defense Fund, Leadership Institute, One Nation Under God Foundation, Downsize DC.org, Downsize DC Foundation, Eagle Forum, Eagle Forum Foundation, The Western Journal, and Conservative Legal Defense and Education Fund.

Christopher A. Ferrara, Whitestone, NY, Brennan Tyler Brooks, Esq., Senior Trial Counsel, Thomas More Society, Chicago, IL, for Amicus Curiae Angela Reading.

Christopher Ernest Mills, Spero Law, L.L.C., Charleston, SC, Gene Patrick Hamilton, America First Legal Foundation, Washington, DC, for Amici Curiae Jim Jordan, Kelly Armstrong, Andy Biggs, Dan Bishop, Kat Cammack, Russell Fry, Lance Gooden, Harriet Hageman, Mike Johnson, Thomas Massie, Barry Moore, and Elise Stefanik.

Before Clement, Elrod, and Willett, Circuit Judges.

ON PETITION FOR REHEARING

Per Curiam:

The petition for panel rehearing is GRANTED. We WITHDRAW our previous opinion and substitute the following.

* * * A group of social-media users and two states allege that numerous federal officials coerced social-media platforms into censoring certain social-media content, in violation of the First Amendment. We agree, but only as to some of those officials. So, we AFFIRM in part, REVERSE in part, VACATE the injunction in part, and MODIFY the injunction in part.

I.

For the last few years—at least since the 2020 presidential transition—a group of federal officials has been in regular contact with nearly every major American social-media company about the spread of "misinformation" on their platforms. In their concern, those officials—hailing from the White House, the CDC, the FBI, and a few other agencies—urged the platforms to remove disfavored content and accounts from their sites. And, the platforms seemingly complied. They gave the officials access to an expedited reporting system, downgraded or removed flagged posts, and deplatformed users. The platforms also changed their internal policies to capture more flagged content and sent steady reports on their moderation activities to the officials. That went on through the COVID-19 pandemic, the 2022 congressional election, and continues to this day.

Enter this lawsuit. The Plaintiffs—three doctors, a news website, a healthcare activist, and two states—had posts and stories removed or downgraded by the platforms. Their content touched on a host of divisive topics like the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story. The Plaintiffs maintain that although the platforms stifled their speech, the government officials were the ones pulling the strings—they "coerced, threatened, and pressured [the] social-media platforms to censor [them]" through private communications and legal threats. So, they sued the officials for First Amendment violations and asked the district court to enjoin the officials' conduct.

Specifically, the Plaintiffs are (1) Jayanta Bhattacharya and Martin Kulldorff, two epidemiologists who co-wrote the Great Barrington Declaration, an article criticizing COVID-19 lockdowns; (2) Jill Hines, an activist who spearheaded "Reopen Louisiana"; (3) Aaron Kheriaty, a psychiatrist who opposed lockdowns and vaccine mandates; (4) Jim Hoft, the owner of the Gateway Pundit, a once-deplatformed news site; and (5) Missouri and Louisiana, who assert their sovereign and quasi-sovereign interests in protecting their citizens and the free flow of information. Bhattacharya, Kulldorff, Hines, Kheriaty, and Hoft, collectively, are referred to herein as the "Individual Plaintiffs." Missouri and Louisiana, together, are referred to as the "State Plaintiffs."

The defendant-officials include (1) the President; (2) his Press Secretary; (3) the Surgeon General; (4) the Department of Health and Human Services; (5) the HHS's Director; (6) Anthony Fauci in his capacity as the Director of the National Institute of Allergy and Infectious Diseases; (7) the NIAID; (8) the Centers for Disease Control; (9) the CDC's Digital Media Chief; (10) the Census Bureau; (11) the Senior Advisor for Communications at the Census Bureau; (12) the Department of Commerce; (13) the Secretary of the Department of Homeland Security; (14) the Senior Counselor to the Secretary of the DHS; (15) the DHS; (16) the Cybersecurity and Infrastructure Security Agency; (17) the Director of CISA; (18) the Department of Justice; (19) the Federal Bureau of Investigation; (20) a special agent of the FBI; (21) a section chief of the FBI; (22) the Food and Drug Administration; (23) the Director of Social Media at the FDA; (24) the Department of State; (25) the Department of Treasury; (26) the Department of Commerce; and (27) the Election Assistance Commission. The Plaintiffs also sued a host of various advisors, officials, and deputies in the White House, the FDA, the CDC, the Census Bureau, the HHS, and CISA. Note that some of these officials were not enjoined and, therefore, are not mentioned again in this opinion.

In response, the officials argued that they only "sought to mitigate the hazards of online misinformation" by "calling attention to content" that violated the "platforms' policies," a form of permissible government speech.

The district court agreed with the Plaintiffs and granted preliminary injunctive relief. In reaching that decision, it reviewed the conduct of several federal offices, but only enjoined the White House, the Surgeon General, the CDC, the FBI, the National Institute of Allergy and Infectious Diseases (NIAID), the Cybersecurity and Infrastructure Security Agency (CISA), and the Department of State. We briefly review—per the district court's order and the record—those officials' conduct.

A.

Considering their close cooperation and the ministerial ecosystem, we take the White House and the Surgeon General's office together. Officials from both offices began communicating with social media companies—including Facebook, Twitter (now known as "X"), YouTube, and Google—in early 2021. From the outset, that came with requests to take down flagged content. In one email, a White House official told a platform to take a post down "ASAP," and instructed it to "keep an eye out for tweets that fall in this same [] genre" so that they could be removed, too. In another, an official told a platform to "remove [an] account immediately"—he could not "stress the degree to which this needs to be resolved immediately." Often, those requests for removal were met.

But, the White House officials did not only flag content. Later that year, they started monitoring the platforms' moderation activities, too. In that vein, the officials asked for—and received—frequent updates from the platforms. Those updates revealed, however, that the platforms' policies were not clear-cut and did not always lead to content being demoted. So, the White House pressed the platforms. For example, one White House official demanded more details and data on Facebook's internal policies at least twelve times, including to ask what was being done to curtail "dubious" or "sensational" content, what "interventions" were being taken, what "measurable impact" the platforms' moderation policies had, "how much content [was] being demoted," and what "misinformation" was not being downgraded. In one instance, that official lamented that flagging did not "historically mean[] that [a post] was removed." In another, the same official told a platform that they had "been asking [] pretty directly, over a series of conversations" for "what actions [the platform has] been taking to mitigate" vaccine hesitancy, to end the platform's "shell game," and that they were "gravely concerned" the platform was "one of the top drivers of vaccine hesitancy." Another time, an official asked why a flagged post was "still up" as it had "gotten pretty far." The official queried "how does something like that happen," and maintained that "I don't think our position is that you should remove vaccine hesitant stuff," but "slowing it down seems reasonable." Always, the officials asked for more data and stronger "intervention[s]."

From the beginning, the platforms cooperated with the White House. One company made an employee "available on a regular basis," and another gave the officials access to special tools like a "Partner Support Portal" which "ensure[d]" that their requests were "prioritized automatically." They all attended regular meetings. But, once White House officials began to demand more from the platforms, they seemingly stepped-up their efforts to appease the officials. When there was confusion, the platforms would call to "clear up" any "misunderstanding[s]" and provide data detailing their moderation activities. When there was doubt, they met with the officials, tried to "partner" with them, and assured them that they were actively trying to "remove the most harmful COVID-19 misleading information." At times, their responses bordered on capitulation. One platform employee, when pressed about not "level[ing]" with the White House, told an official that he would "continue to do it to the best of [his] ability, and [he will] expect [the official] to hold [him] accountable." Similarly, that platform told the Surgeon General that "[w]e're [] committed to addressing the [] misinformation that you've called on us to address." The platforms were apparently eager to stay in the officials' good graces. For example, in an effort to get ahead of a negative news story, Facebook preemptively reached out to the White House officials to tell them that the story "doesn't accurately represent the problem or the solutions we have put in place."

The officials were often unsatisfied. They continued to press the platforms on the topic of misinformation throughout 2021, especially when they seemingly veered from the officials' preferred course. When Facebook did not take a prominent pundit's "popular post[]" down, a White House official asked "what good is" the reporting system, and signed off with "last time we did this dance, it ended in an insurrection." In another message, an official sent Facebook a Washington Post article detailing the platform's alleged failures to limit misinformation with the statement "[y]ou are hiding the ball." A day later, a second official replied that they felt Facebook was not "trying to solve the problem" and the White House was "[i]nternally ... considering our options on what to do about it." In another instance, an official—demanding "assurances" that a platform was taking action—likened the platform's alleged inaction to the 2020 election, which it "helped increase skepticism in, and an insurrection which was plotted, in large part, on your platform."

To ensure that problematic content was being taken down, the officials—via meetings and emails—pressed the platforms to change their moderation policies. For example, one official emailed Facebook a document recommending changes to the platform's internal policies, including to its deplatforming and downgrading systems, with the note that "this is circulating around the building and informing thinking." In another instance, the Surgeon General asked the platforms to take part in an "all-of-society" approach to COVID by implementing stronger misinformation "monitoring" programs, redesigning their algorithms to "avoid amplifying misinformation," targeting "repeat offenders," "[a]mplify[ing] communications from trusted ... experts," and "[e]valuat[ing] the effectiveness of internal policies."

The platforms apparently yielded. They not only continued to take down content the officials flagged, and provided requested data to the White House, but they also changed their moderation policies expressly in accordance with the officials' wishes. For example, one platform said it knew its "position on [misinformation] continues to be a particular concern" for the White House, and said it was "making a number of changes" to capture and downgrade a "broader set" of flagged content. The platform noted that, in line with the officials' requests, it would "make sure that these additional [changes] show results—the stronger demotions in particular should deliver real impact." Another time, a platform represented that it was going to change its moderation policies and activities to fit with express guidance from the CDC and other federal officials. Similarly, one platform noted that it was taking down flagged content which seemingly was not barred under previous iterations of its moderation policy.

Relatedly, the platforms enacted several changes that coincided with the officials' aims shortly after meeting with them. For example, one platform sent out a post-meeting list of "commitments" including a policy "change[]" "focused on reducing the virality" of anti-vaccine content even when it "does not contain actionable misinformation." On another occasion, one platform listed "policy updates ... regarding repeat misinformation" after meeting with the Surgeon General's office and signed off that "[w]e think there's considerably more we can do in partnership with you and your teams to drive behavior."

Even when the platforms did not expressly adopt changes, though, they removed flagged content that did not run afoul of their policies. For example, one email from Facebook stated that although a group of posts did not "violate our community standards," it "should have demoted them before they went viral." In another instance, Facebook recognized that a popular video did not qualify for removal under its policies but promised that it was being "labeled" and "demoted" anyway after the officials flagged it.

At the same time, the platforms often boosted the officials' activities at their request. For example, for a vaccine "roll out," the officials shared "what [t]he admin's plans are" and "what we're seeing as the biggest headwinds" that the platforms could help with. The platforms "welcome[d] the opportunity" to lend a hand. Similarly, when a COVID vaccine was halted, the White House asked a platform to—through "hard ... intervention[s]" and "algorithmic amplification"—"make sure that a favorable review reaches as many people" as possible to stem the spread of alleged misinformation. The officials also asked for labeling of posts and a 24-hour "report-back" period to monitor the public's response. Again, the platforms obliged—they were "keen to amplify any messaging you want us to project," i.e., "the right messages." Another time, a platform told the White House it was "eager" to help with vaccine efforts, including by "amplify[ing]" content. Similarly, a few months later, after the White House shared some of the "administration's plans" for vaccines in an industry meeting, Facebook reiterated that it was "committed to the effort of amplifying the rollout of [those] vaccines."

Still, White House officials felt the platforms were not doing enough. One told a platform that it "remain[ed] concerned" that the platform was encouraging vaccine hesitancy, which was a "concern that is shared at the highest (and I mean highest) levels of the [White House]." So, the official asked for the platform's "road map to improvement" and said it would be "good to have from you all ... a deeper dive on [misinformation] reduction." Another time, the official responded to a moderation report by flagging a user's account and saying it is "[h]ard to take any of this seriously when you're actively promoting anti-vaccine pages." The platform subsequently "removed" the account "entirely" from its site, detailed new changes to the company's moderation policies, and told the official that "[w]e clearly still have work to do." The official responded that "removing bad information" is "one of the easy, low-bar things you guys [can] do to make people like me think you're taking action." The official emphasized that other platforms had "done pretty well" at demoting non-sanctioned information, and said "I don't know why you guys can't figure this out."

The officials' frustrations reached a boiling point in July of 2021. That month, in a joint press conference with the Surgeon General's office, the White House Press Secretary said that the White House "expect[s] more" from the platforms, including that they "consistently take action against misinformation" and "operate with greater transparency and accountability." Specifically, the White House called on platforms to adopt "proposed changes," including limiting the reach of "misinformation," creating a "robust enforcement strategy," taking "faster action" because they were taking "too long," and amplifying "quality information." The Press Secretary said that the White House "engag[es] with [the platforms] regularly and they certainly understand what our asks are." She also expressly noted that several accounts, despite being flagged by the White House, "remain active" on a few platforms.

The Surgeon General also spoke at the press conference. He said the platforms were "one of the biggest obstacles" to controlling the COVID pandemic because they had "enabled misinformation to poison" public discourse and "have extraordinary reach." He labeled social-media-based misinformation an "urgent public health threat[]" that was "literally costing ... lives." He asked social-media companies to "operate with greater transparency and accountability," "monitor misinformation more closely," and "consistently take action against misinformation super-spreaders on their platforms." The Surgeon General contemporaneously issued a public advisory "calling out social media platforms" and saying they "have a role to play to improve [] health outcomes." The next day, President Biden said that the platforms were "killing people" by not acting on misinformation. Then, a few days later, a White House official said they were "reviewing" the legal liability of platforms—noting "the president speak[s] very aggressively about" that—because "they should be held accountable."

The platforms responded with total compliance. Their answer was four-fold. First, they capitulated to the officials' allegations. The day after the President spoke, Facebook asked what it could do to "get back to a good place" with the White House. It sought to "better understand ... what the White House expects from us on misinformation going forward." Second, the platforms changed their internal policies. Facebook reached out to see "how we can be more transparent," comply with the officials' requests, and "deescalate" any tension. Others fell in line, too—YouTube and Google told an official that they were "working on [it]" and relayed the "steps they are currently taking" to do better. A few days later, Facebook told the Surgeon General that "[w]e hear your call for us to do more," and wanted to "make sure [he] saw the steps [it took]" to "adjust policies on what we are removing with respect to misinformation," including "expand[ing] the group of false claims" that it removes. That included the officials' "specific recommendations for improvement," and the platform "want[ed] to make sure to keep [the Surgeon General] informed of [its] work on each."

Third, the platforms began taking down content and deplatforming users they had not previously targeted. For example, Facebook started removing information posted by the "disinfo dozen"—a group of influencers identified as problematic by the White House—despite earlier representations that those users were not in violation of their policies. In general, the platforms had pushed back against deplatforming users in the past, but that changed. Facebook also made other pages that "had not yet met their removal thresholds[] more difficult to find on our platform," and promised to send updates and take more action. A month later, members of the disinfo dozen were deplatformed across several sites. Fourth, the platforms continued to amplify or assist the officials' activities, such as a vaccine "booster" campaign.

Still, the White House kept the pressure up. Officials continuously expressed that they would keep pushing the platforms to act. And, in the following year, the White House Press Secretary stressed that, in regard to problematic users on the platforms, the "President has long been concerned about the power of large" social media companies and that they "must be held accountable for the harms they cause." She continued that the President "has been a strong supporter of fundamental reforms to achieve that goal, including reforms to [S]ection 230, enacting antitrust reforms, requiring more transparency, and more." Per the officials, their back-and-forth with the platforms continues to this day.

B.

Next, we turn to the CDC. Much like the White House officials, the CDC tried to "engage on a [] regular basis" with the platforms. They also received reports on the platforms' moderation activities and policy updates. And, like the other officials, the CDC also flagged content for removal that was subsequently taken down. In one email, an official mentioned sixteen posts and stated, "[W]e are seeing a great deal of misinfo [] that we wanted to flag for you all." In another email, CDC officials noted that flagged content had been removed. And, the CDC actively sought to promote its officials' views over others. For example, they asked "what [was] being done on the amplification-side" of things.

Unlike the other officials, though, the CDC officials also provided direct guidance to the platforms on the application of the platforms' internal policies and moderation activities. They did so in three ways. First, CDC officials authoritatively told the platforms what was (and was not) misinformation. For example, in meetings—styled as "Be On the Lookout" alerts—officials educated the platforms on "misinformation[] hot topics." Second, CDC officials asked for, or at least encouraged, harmonious changes to the platforms' moderation policies. One platform noted that "[a]s soon as the CDC updates [us]," it would change information on its website to comply with the officials' views. In that same email, the platform said it was expanding its "misinfo policies" and it was "able to make this change based on the conversation we had last week with the CDC." In another email, a platform noted "several updates to our COVID-19 Misinformation and Harm policy based on your inputs." Third, through its guidance, the CDC outright directed the platforms to take certain actions. In one post-meeting email, an official said that "as mentioned on the call, any contextual information that can be added to posts" on some alleged "disinformation" "could be very effective."

Ultimately, the CDC's guidance informed, if not directly affected, the platforms' moderation decisions. The platforms sought answers from the officials as to whether certain controversial claims were "true or false" and whether related posts should be taken down as misleading. The CDC officials obliged, directing the platforms as to what was or was not misinformation. Such designations directly controlled the platforms' decision-making process for the removal of content. One platform noted that "[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them."

C.

Next, we consider the conduct of the FBI officials. The agency's officials regularly met with the platforms at least since the 2020 election. In these meetings, the FBI shared "strategic information with [] social-media companies" to alert them to misinformation trends in the lead-up to federal elections. For example, right before the 2022 congressional election, the FBI tipped the platforms off to "hack and dump" operations from "state-sponsored actors" that would spread misinformation through their sites. In another instance, they alerted the platforms to the activities and locations of "Russian troll farms." The FBI apparently acquired this information from ongoing investigations.

Per their operations, the FBI monitored the platforms' moderation policies, and asked for detailed assessments during their regular meetings. The platforms apparently changed their moderation policies in response to the FBI's debriefs. For example, some platforms changed their "terms of service" to be able to tackle content that was tied to hacking operations.

But, the FBI's activities were not limited to purely foreign threats. In the build up to federal elections, the FBI set up "command" posts that would flag concerning content and relay developments to the platforms. In those operations, the officials also targeted domestically sourced "disinformation" like posts that stated incorrect poll hours or mail-in voting procedures. Apparently, the FBI's flagging operations across-the-board led to posts being taken down 50% of the time.

D.

Next, we look at CISA. CISA—working in close connection with the FBI—held regular industry meetings with the platforms concerning their moderation policies, pushing them to adopt CISA's proposed practices for addressing "mis-, dis-, and mal-information." CISA also engaged in "switchboarding" operations, meaning, at least in theory, that CISA officials acted as an intermediary for third parties by forwarding flagged content from them to the platforms. For example, during a federal election, CISA officials would receive "something on social media that [local election officials] deemed to be disinformation aimed at their jurisdiction" and, in turn, CISA would "share [that] with the appropriate social media compan[y]." But, CISA's role went beyond mere information sharing. Like the CDC for COVID-related claims, CISA told the platforms whether certain election-related claims were true or false. CISA's actions apparently led to moderation policies being altered and content being removed or demoted by the recipient platforms.

E.

Finally, we briefly discuss the remaining offices, namely the NIAID and the State Department. Generally speaking, the NIAID did not have regular contact with the platforms or flag content. Instead, NIAID officials were—as evidenced by internal emails—concerned with "tak[ing] down" (i.e., discrediting) opposing scientific or policy views. On that front, Director Anthony Fauci publicly spoke in favor of certain ideas (e.g., COVID lockdowns) and against others (e.g., the lab-leak theory). In doing so, NIAID officials appeared on podcasts and livestreams on some of the platforms. Apparently, the platforms subsequently demoted posts that echoed or supported the discredited views.

The State Department, on the other hand, communicated directly with the platforms. It hosted meetings that were meant to "facilitate [] communication" with the platforms. In those meetings, it educated the platforms on the "tools and techniques" that "malign" or "foreign propaganda actors" (e.g., terrorist groups, China) were using to spread misinformation. Generally, the State Department officials did not flag content, suggest policy changes, or reciprocally receive data during those meetings.

* * *

Relying on the above record, the district court concluded that the officials, via both private and public channels, asked the platforms to remove content, pressed them to change their moderation policies, and threatened them—directly and indirectly—with legal consequences if they did not comply. And, it worked—that "unrelenting pressure" forced the platforms to act and take down users' content. Notably, though, those actions were not limited to private actors. Accounts run by state officials were often subject to censorship, too. For example, one platform removed a post by the Louisiana Department of Justice—which depicted citizens testifying against public policies regarding COVID—for violating its "medical misinformation policy" by "spread[ing] medical misinformation." In another instance, a platform took down a Louisiana state legislator's post discussing COVID vaccines. Similarly, one platform removed several videos, namely testimonials regarding COVID, posted by St. Louis County. So, the district court reasoned, the Plaintiffs were "likely to succeed" on their claim because when the platforms moderated content, they were acting under the coercion (or significant encouragement) of government officials, in violation of the First Amendment, at the expense of both private and governmental actors.

In addition, the court found that considerations of equity weighed in favor of an injunction because of the clear need to safeguard the Plaintiffs' First Amendment rights. Finally, the court ruled that the Plaintiffs had standing to bring suit under several different theories, including direct First Amendment censorship and, for the State Plaintiffs, quasi-sovereign interests as well. Consequently, the district court entered an injunction against the officials barring them from an assortment of activities, including "meeting with," "communicat[ing]" with, or "flagging content" for social-media companies "for the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech." The officials appeal.

II.

We review the district court's standing determination de novo. Freedom Path, Inc. v. Internal Revenue Serv., 913 F.3d 503, 507 (5th Cir. 2019). "We review a preliminary injunction for abuse of discretion, reviewing findings of fact for clear error and conclusions of law de novo. Whether an injunction fulfills the mandates of Fed. R. Civ. P. 65(d) is a question of law we review de novo." Louisiana v. Biden, 45 F.4th 841, 845 (5th Cir. 2022) (internal quotation marks and citation omitted).

III.

We begin with standing. To establish Article III standing, the Plaintiffs bear the burden to show "[1] an injury in fact [2] that is fairly traceable to the challenged action of the defendant and [3] likely to be redressed by [their] requested relief." Stringer v. Whitley, 942 F.3d 715, 720 (5th Cir. 2019) (citing Lujan v. Defs. of Wildlife, 504 U.S. 555, 560-61, 112 S.Ct. 2130, 119 L.Ed.2d 351 (1992)). Because the Plaintiffs seek injunctive relief, the injury-in-fact and redressability requirements "intersect[]" and therefore the Plaintiffs must "demonstrat[e] a continuing injury or threatened future injury," not a past one. Id. "At the preliminary injunction stage, the movant must clearly show only that each element of standing is likely to obtain in the case at hand." Speech First, Inc. v. Fenves, 979 F.3d 319, 330 (5th Cir. 2020) (citations omitted). The presence of any one plaintiff with standing to pursue injunctive relief as to the Plaintiffs' First-Amendment claim satisfies Article III's case-or-controversy requirement. Rumsfeld v. F. for Acad. & Institutional Rts., Inc., 547 U.S. 47, 52 n.2, 126 S.Ct. 1297, 164 L.Ed.2d 156 (2006).

A.

An injury-in-fact is "'an invasion of a legally protected interest' that is 'concrete and particularized' and 'actual or imminent, not conjectural or hypothetical.'" Spokeo, Inc. v. Robins, 578 U.S. 330, 339, 136 S.Ct. 1540, 194 L.Ed.2d 635 (2016) (quoting Lujan, 504 U.S. at 560, 112 S.Ct. 2130). "For a threatened future injury to satisfy the imminence requirement, there must be at least a 'substantial risk' that the injury will occur." Crawford v. Hinds Cnty. Bd. of Supervisors, 1 F.4th 371, 375 (5th Cir. 2021) (quoting Stringer, 942 F.3d at 721). Past harm can constitute an injury-in-fact for purposes of pursuing injunctive relief if it causes "continuing, present adverse effects." City of Los Angeles v. Lyons, 461 U.S. 95, 102, 103 S.Ct. 1660, 75 L.Ed.2d 675 (1983) (quoting O'Shea v. Littleton, 414 U.S. 488, 495-96, 94 S.Ct. 669, 38 L.Ed.2d 674 (1974)). Otherwise, "'[p]ast wrongs are evidence' of the likelihood of a future injury but 'do not in themselves amount to that real and immediate threat of injury necessary to make out a case or controversy.'" Crawford, 1 F.4th at 375 (quoting Lyons, 461 U.S. at 102-03, 103 S.Ct. 1660) (alteration adopted).

Each of the Individual Plaintiffs has shown past injury-in-fact. Bhattacharya's and Kulldorff's sworn declarations allege that their article, the Great Barrington Declaration, which was critical of the government's COVID-related policies such as lockdowns, was "deboosted" in Google search results and removed from Facebook and Reddit, and that their roundtable discussion with Florida Governor Ron DeSantis concerning mask requirements in schools was removed from YouTube. Kulldorff also claimed censorship of his personal Twitter and LinkedIn accounts due to his opinions concerning vaccine and mask mandates; both accounts were suspended (although ultimately restored). Kheriaty, in his sworn declaration, attested to the fact that his Twitter following was "artificially suppressed" and his posts "shadow bann[ed]" so that they did not appear in his followers' feeds due to his views on vaccine mandates and lockdowns, and that a video of one of his interviews concerning vaccine mandates was removed from YouTube (but ultimately re-posted). Hoft—founder, owner, and operator of news website The Gateway Pundit—submitted a sworn declaration averring that The Gateway Pundit's Twitter account was suspended and then banned for its tweets about vaccine mandates and election fraud, its Facebook posts concerning COVID-19 and election security were either banned or flagged as false or misinformation, and a YouTube video concerning voter fraud was removed. Hoft's declaration included photographic proof of the Twitter and Facebook censorship he had suffered. And Hines's declaration swears that her personal Facebook account was suspended and the Facebook posts of her organization, Health Freedom Louisiana, were censored and removed for their views on vaccine and mask mandates.

The officials do not contest that these past injuries occurred. Instead, they argue that the Individual Plaintiffs have failed to demonstrate that the harm from these past injuries is ongoing or that similar injury is likely to reoccur in the future, as required for standing to pursue injunctive relief. We disagree with both assertions.

All five Individual Plaintiffs have stated in sworn declarations that their prior censorship has caused them to self-censor and carefully word social-media posts moving forward in hopes of avoiding suspensions, bans, and censorship in the future. Kulldorff, for example, explained that he now "restrict[s] what [he] say[s] on social-media platforms to avoid suspension and other penalties." Kheriaty described how he now must be "extremely careful when posting any information on Twitter related to the vaccines, to avoid getting banned" and that he intentionally "limit[s] what [he] say[s] publicly," even "on topics where [he] ha[s] specific scientific and ethical expertise and professional experience." And Hoft notes that, "[t]o avoid suspension and other forms of censorship, [his website] frequently avoid[s] posting content that [it] would otherwise post on social-media platforms, and [] frequently alter[s] content to make it less likely to trigger censorship policies." These lingering effects of past censorship must be factored into the standing calculus. See Lyons, 461 U.S. at 102, 103 S.Ct. 1660.

As the Supreme Court has recognized, this chilling of the Individual Plaintiffs' exercise of their First Amendment rights is, itself, a constitutionally sufficient injury. See Laird v. Tatum, 408 U.S. 1, 11, 92 S.Ct. 2318, 33 L.Ed.2d 154 (1972). True, "to confer standing, allegations of chilled speech or self-censorship must arise from a fear of [future harm] that is not imaginary or wholly speculative." Zimmerman v. City of Austin, Tex., 881 F.3d 378, 390 (5th Cir. 2018) (internal quotation marks and citation omitted); see also Clapper v. Amnesty Int'l USA, 568 U.S. 398, 416, 133 S.Ct. 1138, 185 L.Ed.2d 264 (2013) (Plaintiffs "cannot manufacture standing merely by inflicting harm on themselves based on their fears of hypothetical future harm"). But the fears motivating the Individual Plaintiffs' self-censorship, here, are far from hypothetical. Rather, they are grounded in the very real censorship injuries they have previously suffered to their speech on social media, which are "evidence of the likelihood of a future injury." Crawford, 1 F.4th at 375 (internal quotation marks and citation omitted). Supported by this evidence, the Individual Plaintiffs' self-censorship is a cognizable, ongoing harm resulting from their past censorship injuries, and therefore constitutes injury-in-fact upon which those Plaintiffs may pursue injunctive relief. Lyons, 461 U.S. at 102, 103 S.Ct. 1660.

Separate from their ongoing harms, the Individual Plaintiffs have shown a substantial risk that the injuries they suffered in the past will reoccur. The officials suggest that there is no threat of future injury because "Twitter has stopped enforcing its COVID-related misinformation policy." But this does nothing to mitigate the risk of future harm to the Individual Plaintiffs. Twitter continues to enforce a robust general misinformation policy, and the Individual Plaintiffs seek to express views—and have been censored for their views—on topics well beyond COVID-19, including allegations of election fraud and the Hunter Biden laptop story.

Notably, Twitter maintains a separate "crisis misinformation policy" which applies to "public health emergencies." Crisis misinformation policy, TWITTER (August 2022), https://help.twitter.com/en/rules-and-policies/crisis-misinformation. This policy would presumably apply to COVID-related misinformation if COVID-19 were again classified as a Public Health Emergency, as it was until May 11, 2023. See End of the Federal COVID-19 Public Health Emergency (PHE) Declaration, CTRS. FOR DISEASE CONTROL & PREVENTION (May 5, 2023), https://www.cdc.gov/coronavirus/2019-ncov/your-health/end-of-phe.html.

Plaintiffs use social-media platforms other than Twitter—such as Facebook and YouTube—which still enforce COVID- or health-specific misinformation policies. And most fundamentally, the Individual Plaintiffs are not seeking to enjoin Twitter's content moderation policies (or those of any other social-media platform, for that matter). Rather, as Plaintiffs' counsel made clear at oral argument, what the Individual Plaintiffs are challenging is the government's interference with those social-media companies' independent application of their policies. And there is no evidence to suggest that the government's meddling has ceased. To the contrary, the officials' attorney conceded at oral argument that they continue to be in regular contact with social-media platforms concerning content-moderation issues today.

Facebook Community Standards: Misinformation, META, https://transparency.fb.com/policies/community-standards/misinformation/ (last visited August 11, 2023); Misinformation policies, YOUTUBE, https://support.google.com/youtube/topic/10833358 (last visited August 11, 2023).

The officials also contend that future harm is unlikely because "all three plaintiffs who suggested that their social-media accounts had been permanently suspended in the past now appear to have active accounts." But as the Ninth Circuit recently recognized, this fact weighs in Plaintiffs' favor. In O'Handley v. Weber, considering this issue in the context of redressability, the Ninth Circuit explained:

When plaintiffs seek injunctive relief, the injury-in-fact and redressability requirements intersect. Stringer, 942 F.3d at 720. So, it makes no difference that the Ninth Circuit addressed the issue of reinstated social-media accounts in its redressability analysis while we address it as part of injury-in-fact. The ultimate question is whether there was a sufficient threat of future injury to warrant injunctive relief.

Until recently, it was doubtful whether [injunctive] relief would remedy [the plaintiff]'s alleged injuries because Twitter had permanently suspended his account, and the requested injunction [against government-imposed social-media censorship] would not change that fact. Those doubts disappeared in December 2022 when Twitter restored his account.

62 F.4th 1145, 1162 (9th Cir. 2023). The same logic applies here. If the Individual Plaintiffs did not currently have active social-media accounts, then there would be no risk of future government-coerced censorship of their speech on those accounts. But since the Individual Plaintiffs continue to be active speakers on social media, they continue to face the very real and imminent threat of government-coerced social-media censorship.

Because the Individual Plaintiffs have demonstrated ongoing harm from their past censorship as well as a substantial risk of future harm, they have established an injury-in-fact sufficient to support their request for injunctive relief.

B.

Turning to the second element of Article III standing, the Individual Plaintiffs were also required to show that their injuries were "fairly traceable" to the challenged conduct of the officials. Stringer, 942 F.3d at 720. When, as is alleged here, the "causal relation between [the claimed] injury and [the] challenged action depends upon the decision of an independent third party ... standing is not precluded, but it is ordinarily substantially more difficult to establish." California v. Texas, — U.S. —, 141 S. Ct. 2104, 2117, 210 L.Ed.2d 230 (2021) (internal quotation marks and citation omitted). "To satisfy that burden, the plaintiff[s] must show at the least 'that third parties will likely react in predictable ways.'" Id. (quoting Dep't of Com. v. New York, — U.S. —, 139 S. Ct. 2551, 2566, 204 L.Ed.2d 978 (2019)).

The officials contend that traceability is lacking because the Individual Plaintiffs' censorship was a result of "independent decisions of social-media companies." This conclusion, they say, is a matter of timing: social-media platforms implemented content-moderation policies in early 2020 and therefore the Biden Administration—which took office in January 2021—"could not be responsible for [any resulting] content moderation." But as we just explained, the Individual Plaintiffs do not challenge the social-media platforms' content-moderation policies. So, the fact that the Individual Plaintiffs' censorship can be traced back, at least in part, to third-party policies that pre-date the current presidential administration is irrelevant. The dispositive question is whether the Individual Plaintiffs' censorship can also be traced to government-coerced enforcement of those policies. We agree with the district court that it can be.

On this issue, Department of Commerce is instructive. There, a group of plaintiffs brought a constitutional challenge against the federal government's decision to reinstate a citizenship question on the 2020 census. 139 S. Ct. at 2561. Their theory of harm was that, as a result of this added question, noncitizen households would respond to the census at lower rates than citizen households due to fear of immigration-related consequences, which would, in turn, lead to undercounting of population in certain states and a concomitant diminishment in political representation and loss of federal funds. Id. at 2565-66. In response, the government presented many of the same causation arguments raised here, contending that any harm to the plaintiffs was "not fairly traceable to the [government]'s decision" but rather "depend[ed] on the independent action of third parties" (there, noncitizens refusing to respond to the census; here, social-media companies censoring posts) which "would be motivated by unfounded fears that the Federal Government will itself break the law" (there, "using noncitizens' answers against them for law enforcement purposes"; here, retaliatory enforcement actions or regulatory reform). Id. But a unanimous Supreme Court disagreed. As the Court explained, the plaintiffs had "met their burden of showing that third parties will likely react in predictable ways to the citizenship question" because evidence "established that noncitizen households have historically responded to the census at lower rates than other groups" and the district court had "not clearly err[ed] in crediting the ... theory that the discrepancy [was] likely attributable at least in part to noncitizens' reluctance to answer a citizenship question." Id. at 2566.

That logic is directly applicable here. The Individual Plaintiffs adduced extensive evidence that social-media platforms have engaged in censorship of certain viewpoints on key issues and that the government has engaged in a years-long pressure campaign designed to ensure that the censorship aligned with the government's preferred viewpoints. The district court did not clearly err in crediting the Individual Plaintiffs' theory that the social-media platforms' censorship decisions were likely attributable at least in part to the platforms' reluctance to risk the adverse legal or regulatory consequences that could result from a refusal to adhere to the government's directives. The Individual Plaintiffs therefore met their burden of showing that the social-media platforms will likely react in a predictable way—i.e., censoring speech—in response to the government's actions.

To be sure, there were instances where the social-media platforms declined to remove content that the officials had identified for censorship. But predictability does not require certainty, only likelihood. See Dep't of Com., 139 S. Ct. at 2566 (requiring that third parties "will likely react in predictable ways"). Here, the Individual Plaintiffs presented extensive evidence of escalating threats—both public and private—by government officials aimed at social-media companies concerning their content-moderation decisions. The district court thus had a sound basis upon which to find a likelihood that, faced with unrelenting pressure from the most powerful office in the world, social-media platforms did, and would continue to, bend to the government's will. This determination was not, as the officials contend, based on "unadorned speculation." Rather, it was a logical conclusion based directly on the evidence adduced during preliminary discovery.

C.

The final element of Article III standing—redressability—required the Individual Plaintiffs to demonstrate that it was "likely, as opposed to merely speculative, that the [alleged] injury will be redressed by a favorable decision." Lujan, 504 U.S. at 561, 112 S.Ct. 2130 (internal quotation marks and citation omitted). The redressability analysis focuses on "the relationship between the judicial relief requested and the injury" alleged. California, 141 S. Ct. at 2115 (internal quotation marks and citation omitted).

Beginning first with the injury alleged, we have noted multiple times now an important distinction between censorship as a result of social-media platforms' independent application of their content-moderation policies, on the one hand, and censorship as a result of social-media platforms' government-coerced application of those policies, on the other. As Plaintiffs' counsel made clear at oral argument, the Individual Plaintiffs seek to redress the latter injury, not the former.

The Individual Plaintiffs have not sought to invalidate social-media companies' censorship policies. Rather, they asked the district court to restrain the officials from unlawfully interfering with the social-media companies' independent application of their content-moderation policies. As the Ninth Circuit has also recognized, there is a direct relationship between this requested relief and the injury alleged such that redressability is satisfied. See O'Handley, 62 F.4th at 1162.

D.

We also conclude that the State Plaintiffs are likely to establish direct standing. First, state officials have suffered, and will likely continue to suffer, direct censorship on social media. For example, the Louisiana Department of Justice posted a video showing Louisiana citizens testifying at the State Capitol and questioning the efficacy of COVID-19 vaccines and mask mandates. But one platform removed the video for spreading alleged "medical misinformation" and warned that any subsequent violations would result in suspension of the state's account. The state thereafter modified its practices for posting on social media for fear of future censorship injury.

The State Plaintiffs also contend that they have parens patriae standing. We do not consider this alternative argument.

Similarly, another platform took down a Louisiana state legislator's post discussing COVID vaccines. And several videos posted by St. Louis County showing residents discussing COVID policies were removed, too. Acts of this nature continue to this day. In fact, at oral argument, counsel for the State of Louisiana explained that YouTube recently removed a video of counsel, speaking in his official capacity, criticizing the federal government's alleged unconstitutional censorship in this case.

These actions are not limited to the State Plaintiffs. On the contrary, other states' officials have offered evidence of numerous other instances where their posts were removed, restricted, or otherwise censored.

These acts of censorship confer standing for substantially the same reasons as those discussed for the Individual Plaintiffs. That is, they constitute an ongoing injury, and demonstrate a likelihood of future injury, traceable to the conduct of the federal officials and redressable by an injunction against them.

The federal officials admit that these instances of censorship occurred but deny that the State Plaintiffs have standing based on the assertion that "the First Amendment does not confer rights on States." But the Supreme Court has made clear that the government (state and otherwise) has a "right" to speak on its own behalf. Bd. of Regents of Univ. of Wis. Sys. v. Southworth, 529 U.S. 217, 229, 120 S.Ct. 1346, 146 L.Ed.2d 193 (2000); see also Walker v. Tex. Div., Sons of Confederate Veterans, Inc., 576 U.S. 200, 207-08, 135 S.Ct. 2239, 192 L.Ed.2d 274 (2015). Perhaps that right derives from a state's sovereign nature, rather than from the First Amendment itself. But regardless of the source of the right, the State Plaintiffs sustain a direct injury when the social-media accounts of state officials are censored due to federal coercion.

Federally coerced censorship harms the State Plaintiffs' ability to listen to their citizens as well. This right to listen is "reciprocal" to the State Plaintiffs' right to speak and constitutes an independent basis for the State Plaintiffs' standing here. Va. State Bd. of Pharm. v. Va. Citizens Consumer Council, 425 U.S. 748, 757, 96 S.Ct. 1817, 48 L.Ed.2d 346 (1976).

Officials from the States of Missouri and Louisiana testified that they regularly use social media to monitor their citizens' concerns. As explained by one Louisiana official:

[M]ask and vaccine mandates for students have been a very important source of concern and public discussion by Louisiana citizens over the last year. It is very important for me to have access to free public discourse on social media on these issues so I can understand what our constituents are actually thinking, feeling, and expressing about such issues, and so I can communicate properly with them.

And a Missouri official testified to several examples of critical speech on an important topic that he was not able to review because it was censored:

[O]ne parent who posted on nextdoor.com (a neighborhood networking site operated by Facebook) an online petition to encourage his school to remain mask-optional found that his posts were quietly removed without notifying him, and his online friends never saw them. Another parent in the same school district who objected to mask mandates for schoolchildren responded to Dr. Fauci on Twitter, and promptly received a warning from Twitter that his account would be banned if he did not delete the tweets criticizing Dr. Fauci's approach to mask mandates. These examples are just the sort of online speech by Missourians that it is important for me and the Missouri Attorney General's Office to be aware of.

The Government does not dispute that the State Plaintiffs have a crucial interest in listening to their citizens. Indeed, the CDC's own witness explained that if content were censored and removed from social-media platforms, government communicators would not "have the full picture" of what their citizens' true concerns are. So, when the federal government coerces or significantly encourages third parties to censor certain viewpoints, it hampers the states' right to hear their constituents and, in turn, reduces their ability to respond to those concerns. This injury, too, means the states likely have standing. See Va. State Bd. of Pharm., 425 U.S. at 757, 96 S.Ct. 1817.

* * *

The Plaintiffs have standing because they have demonstrated ongoing harm from past social-media censorship and a likelihood of future censorship, both of which are injuries traceable to government-coerced enforcement of social-media platforms' content-moderation policies and redressable by an injunction against the government officials. We therefore proceed to the merits of Plaintiffs' claim for injunctive relief.

The Individual Plaintiffs' standing and the State Plaintiffs' standing provide independent bases upon which the Plaintiffs' injunctive-relief claim may proceed since there need be only one plaintiff with standing to satisfy the requirements of Article III. Rumsfeld, 547 U.S. at 52 n.2, 126 S.Ct. 1297.

IV.

A party seeking a preliminary injunction must establish that (1) they are likely to succeed on the merits, (2) there is a "substantial threat" they will suffer an "irreparable injury" otherwise, (3) the potential injury "outweighs any harm that will result" to the other side, and (4) an injunction will not "disserve the public interest." Atchafalaya Basinkeeper v. U.S. Army Corps of Eng'rs, 894 F.3d 692, 696 (5th Cir. 2018) (citing La Union Del Pueblo Entero v. FEMA, 608 F.3d 217, 219 (5th Cir. 2010)). Of course, a "preliminary injunction is an extraordinary remedy," meaning it should not be entered lightly. Id.

We start with likelihood of success. The Plaintiffs allege that federal officials ran afoul of the First Amendment by coercing and significantly encouraging "social-media platforms to censor disfavored [speech]," including by "threats of adverse government action" like antitrust enforcement and legal reforms. We agree.

A.

The government cannot abridge free speech. U.S. CONST. amend. I. A private party, on the other hand, bears no such burden—it is "not ordinarily constrained by the First Amendment." Manhattan Cmty. Access Corp. v. Halleck, — U.S. —, 139 S. Ct. 1921, 1930, 204 L.Ed.2d 405 (2019). That changes, though, when a private party is coerced or significantly encouraged by the government to such a degree that its "choice"—which if made by the government would be unconstitutional, Norwood v. Harrison, 413 U.S. 455, 465, 93 S.Ct. 2804, 37 L.Ed.2d 723 (1973)—"must in law be deemed to be that of the State." Blum v. Yaretsky, 457 U.S. 991, 1004, 102 S.Ct. 2777, 73 L.Ed.2d 534 (1982); Barnes v. Lehman, 861 F.2d 1383, 1385-86 (5th Cir. 1988). This is known as the close nexus test.

That makes sense: First Amendment rights "are protected not only against heavy-handed frontal attack, but also from being stifled by more subtle governmental interference." Bates v. City of Little Rock, 361 U.S. 516, 523, 80 S.Ct. 412, 4 L.Ed.2d 480 (1960).

Note that, at times, we have called this test by a few other names. See, e.g., Frazier v. Bd. of Trustees of Nw. Miss. Reg'l Med. Ctr., 765 F.2d 1278, 1284 (5th Cir. 1985) ("the fair attribution test"); Bass v. Parkwood Hosp., 180 F.3d 234, 242 (5th Cir. 1999) ("The state compulsion (or coercion) test"). We settle that dispute now—it is the close nexus test. Am. Mfrs., 526 U.S. at 52, 119 S.Ct. 977 (a "close nexus" is required). In addition, some of our past decisions have confused this test with the joint action test, see Bass, 180 F.3d at 242, but the two are separate tests with separate considerations.

Under that test, we "begin[] by identifying 'the specific conduct of which the plaintiff complains.'" Am. Mfrs. Mut. Ins. Co. v. Sullivan, 526 U.S. 40, 51, 119 S.Ct. 977, 143 L.Ed.2d 130 (1999) (quoting Blum, 457 U.S. at 1004, 102 S.Ct. 2777 ("Faithful adherence to the 'state action' requirement ... requires careful attention to the gravamen of the plaintiff's complaint.")). Then, we ask whether the government sufficiently induced that act. Not just any coaxing will do, though. After all, "the government can speak for itself," which includes the right to "advocate and defend its own policies." Southworth, 529 U.S. at 229, 120 S.Ct. 1346; see also Walker, 576 U.S. at 207, 135 S.Ct. 2239. But, on one hand there is persuasion, and on the other there is coercion and significant encouragement—two distinct means of satisfying the close nexus test. See Louisiana Div. Sons of Confederate Veterans v. City of Natchitoches, 821 F. App'x 317, 320 (5th Cir. 2020) (per curiam) ("Responding agreeably to a request and being all but forced by the coercive power of a governmental official are different categories of responses ..."). Where we draw that line, though, is the question before us today.

1.

We start with encouragement. To constitute "significant encouragement," there must be such a "close nexus" between the parties that the government is practically "responsible" for the challenged decision. Blum, 457 U.S. at 1004, 102 S.Ct. 2777 (emphasis in original). What, then, is a close nexus? We know that "the mere fact that a business is subject to state regulation" is not sufficient. Id. (alteration adopted) (citation omitted); Halleck, 139 S. Ct. at 1932 ("Put simply, being regulated by the State does not make one a state actor."). And, it is well established that the government's "[m]ere approval of or acquiescence in" a private party's actions is not enough either. Blum, 457 U.S. at 1004-05, 102 S.Ct. 2777. Instead, for encouragement, we find that the government must exercise some active, meaningful control over the private party's decision.

Take Blum v. Yaretsky. There, the Supreme Court found there was no state action because a decision to discharge a patient—even if it followed from the "requir[ed] completion of a form" under New York law—was made by private physicians, not the government. Id. at 1006-08, 102 S.Ct. 2777. The plaintiff argued that, by regulating and overseeing the facility, the government had "affirmatively command[ed]" the decision. Id. at 1005, 102 S.Ct. 2777. The Court was not convinced—it emphasized that "physicians, [] not the forms, make the decision" and they do so under "professional standards that are not established by the State." Id. Similarly, in Rendell-Baker v. Kohn the Court found that a private school—which the government funded and placed students at—was not engaged in state action because the conduct at issue, namely the decision to fire someone, "[was] not ... influenced by any state regulation." 457 U.S. 830, 841, 102 S.Ct. 2764, 73 L.Ed.2d 418 (1982).

Compare that, though, to Roberts v. Louisiana Downs, Inc., 742 F.2d 221 (5th Cir. 1984). There, we held that a horseracing club's action was attributable to the state because the Louisiana government—through legal and informal supervision—was overly involved in the decision to deny a racer a stall. Id. at 224. "Something more [was] present [] than simply extensive regulation of an industry, or passive approval by a state regulatory entity of a decision by a regulated business." Id. at 228. Instead, the stalling decision was made partly by the "racing secretary," a legislatively created position accompanied by expansive supervision from on-site state officials who had the "power to override decisions" made by the club's management. Id. So, even though the secretary was plainly a "private employee" paid by the club, the state's extensive oversight—coupled with some level of authority on the part of the state—meant that the club's choice was not fully independent or made wholly subject to its own policies. Id. at 227-28. This case thus sits at the opposite end of the state-involvement spectrum from Blum.

Per Blum and Roberts, then, significant encouragement requires "[s]omething more" than uninvolved oversight from the government. Id. at 228. After all, there must be a "close nexus" that renders the government practically "responsible" for the decision. Blum, 457 U.S. at 1004, 102 S.Ct. 2777. Taking that in context, we find that the clear throughline for encouragement in our caselaw is that there must be some exercise of active (not passive), meaningful (impactful enough to render them responsible) control on the part of the government over the private party's challenged decision. Whether that is (1) entanglement in a party's independent decision-making or (2) direct involvement in carrying out the decision itself, the government must encourage the decision to such a degree that we can fairly say it was the state's choice, not the private actor's. See id.; Roberts, 742 F.2d at 224; Rendell-Baker, 457 U.S. at 841, 102 S.Ct. 2764 (close nexus test is met if action is "compelled or [] influenced" by the state (emphasis added)); Frazier, 765 F.2d at 1286 (significant encouragement is met when "the state has had some affirmative role, albeit one of encouragement short of compulsion," in the decision). Take Howard Gault Co. v. Texas Rural Legal Aid, Inc., 848 F.2d 544 (5th Cir. 1988). There, a group of onion growers—by way of state picketing laws and local officials—shut down a workers' strike. Id. at 548-49. We concluded that the growers' "activity"—axing the strike—"while not compelled by the state, was so significantly encouraged, both overtly and covertly, that the choice must in law be deemed to be that of the state." Id. at 555 (alterations adopted) (citation and quotation marks omitted) (emphasis added). Specifically, "[i]t was the heavy participation of state and state officials," including local prosecutors and police officers, "that [brought] [the conduct] under color of state law." Id. In other words, the officials were directly involved in carrying out the challenged decision. That satisfied the requirement that, to encourage a decision, the government must exert some meaningful, active control over the private party's decision.

This differs from the "joint action" test that we have considered in other cases. Under that doctrine, a private party may be considered a state actor when it "operates as a 'willful participant in joint activity with the State or its agents.'" Brentwood Acad. v. Tenn. Secondary Sch. Athletic Ass'n, 531 U.S. 288, 296, 121 S.Ct. 924, 148 L.Ed.2d 807 (2001) (quoting Lugar v. Edmondson Oil Co., 457 U.S. 922, 941, 102 S.Ct. 2744, 73 L.Ed.2d 482 (1982)). The difference between the two lies primarily in the degree of the state's involvement.
Under the joint action test, the level of integration is very high—there must be "pervasive entwinement" between the parties. Id. at 298, 121 S.Ct. 924. That is integration to such a degree that "will support a conclusion that an ostensibly private organization ought to be charged with a public character." Id. at 302, 121 S.Ct. 924 (emphasis added) (finding state action by athletic association when public officials served on the association's board, public institutions provided most of the association's funding, and the association's employees received public benefits); see also Rendell-Baker, 457 U.S. at 842, 102 S.Ct. 2764 (requiring a "symbiotic relationship"); Frazier, 765 F.2d at 1288 & n.22 (explaining that although the joint action test involves the government playing a "meaningful role" in the private actor's decision, that role must be part of a "functionally symbiotic" relationship that is so extensive that "any act of the private entity will be fairly attributable to the state even if it cannot be shown that the government played a direct role in the particular action challenged." (emphases added)).
Under the close nexus test, however, the government is not deeply intertwined with the private actor as a whole. Instead, the state is involved in only one facet of the private actor's operations—its decision-making process regarding the challenged conduct. Roberts, 742 F.2d at 224; Howard Gault, 848 F.2d at 555. That is a much narrower level of integration. See Roberts, 742 F.2d at 228 ("We do not today hold that the state and Louisiana Downs are in such a relationship that all acts of the track constitute state action, nor that all acts of the racing secretary constitute state action," but instead that "[i]n the area of stalling, ... state regulation and involvement is so specific and so pervasive that [such] decisions may be considered to bear the imprimatur of the state."). Consequently, the showings required by a plaintiff differ. Under the joint action test, the plaintiff must prove substantial integration between the two entities in toto. For the close nexus test, the plaintiff instead must only show significant involvement from the state in the particular challenged action.
Still, there is admittedly some overlap between the tests. See Brentwood, 531 U.S. at 303, 121 S.Ct. 924 ("'Coercion' and 'encouragement' are like 'entwinement' in referring to kinds of facts that can justify characterizing an ostensibly private action as public instead. Facts that address any of these criteria are significant, but no one criterion must necessarily be applied. When, therefore, the relevant facts show pervasive entwinement to the point of largely overlapping identity, the implication of state action is not affected by pointing out that the facts might not loom large under a different test."). But, that is to be expected—these tests are not "mechanical[ly]" applied. Roberts, 742 F.2d at 224.

We note that although state-action caselaw seems to deal most often with § 1983 (i.e., the under-color-of-law prong) and the Fourteenth Amendment, there is no clear directive from the Supreme Court that any variation in the law or government at issue changes the state-action analysis. See Blum, 457 U.S. at 1004, 102 S.Ct. 2777. In fact, we have expressly rejected such ideas. See Miller v. Hartwood Apartments, Ltd., 689 F.2d 1239, 1243 (5th Cir. 1982) ("Although the Blum decision turned on § 1983, we find the determination of federal action to rest on the same general principles as determinations of state action."); Barnes, 861 F.2d at 1385 ("The analysis of state action under the Fourteenth Amendment and the analysis of action under color of state law may coincide for purposes of § 1983.").

Our reading of what encouragement means under the close nexus test tracks with other federal courts, too. For example, the Ninth Circuit reads the close nexus test to be satisfied when, through encouragement, the government "overwhelm[s] the private party['s]" choice in the matter, forcing it to "act in a certain way." O'Handley, 62 F.4th at 1158; Rawson v. Recovery Innovations, Inc., 975 F.3d 742, 751 (9th Cir. 2020) ("A finding that individual state actors or other state requirements literally 'overrode' a nominally private defendant's independent judgment might very well provide relevant information."). That analysis, much like meaningful control, asks whether a decision "was the result of [a party's] own independent judgment." O'Handley, 62 F.4th at 1159.

2.

Next, we turn to coercion—a separate and distinct means of satisfying the close nexus test. Generally speaking, if the government compels the private party's decision, the result will be considered a state action. Blum, 457 U.S. at 1004, 102 S.Ct. 2777. So, what is coercion? We know that simply "being regulated by the State does not make one a state actor." Halleck, 139 S. Ct. at 1932. Coercion, too, must be something more. But, distinguishing coercion from persuasion is a more nuanced task than doing the same for encouragement. Encouragement is evidenced by an exercise of active, meaningful control, whether by entanglement in the party's decision-making process or direct involvement in carrying out the decision itself. Therefore, it may be more noticeable and, consequently, more distinguishable from persuasion. Coercion, on the other hand, may be more subtle. After all, the state may advocate—even forcefully—on behalf of its positions. Southworth, 529 U.S. at 229, 120 S.Ct. 1346.

Consider a Second Circuit case, National Rifle Ass'n v. Vullo, 49 F.4th 700 (2d Cir. 2022). There, a New York state official "urged" insurers and banks via strongly worded letters to drop the NRA as a client. Id. at 706. In those letters, the official alluded to reputational harms that the companies would suffer if they continued to support a group that has allegedly caused or encouraged "devastation" and "tragedies" across the country. Id. at 709. Also, the official personally told a few of the companies in a closed-door meeting that she "was less interested in pursuing the [insurers' regulatory] infractions.... so long as [they] ceased" working with the NRA. Id. at 718. Ultimately, the Second Circuit found that both the letters and the statement did not amount to coercion, but instead "permissible government speech." Id. at 717, 719. In reaching that decision, the court emphasized that "[a]lthough she did have regulatory authority over the target audience," the official's letters were written in a "nonthreatening tone" and used persuasive, non-intimidating language. Id. at 717. Relatedly, while she referenced "adverse consequences" if the companies did not comply, they were only "reputational risks"—there was no intimation that "punishment or adverse regulatory action would follow the failure to accede to the request." Id. (alterations adopted). As for the "so long as" statement, the Second Circuit found that—when viewed in "context"—the official was merely "negotiating[] and resolving [legal] violations," a legitimate power of her office. Id. at 718-19. Because she was only "carrying out her regulatory responsibilities" and "engaging in legitimate enforcement action," the official's references to infractions were not coercive. Id. Thus, the Second Circuit found that seemingly threatening language was actually permissible government advocacy.

Apparently, the companies had previously issued "illegal insurance policies—programs created and endorsed by the NRA"—that covered litigation defense costs resulting from any firearm-related injury or death, in violation of New York law. Vullo, 49 F.4th at 718. The court reasoned that the official had the power to bring those issues to a close.

That is not to say that coercion is always difficult to identify. Sometimes, coercion is obvious. Take Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 83 S.Ct. 631, 9 L.Ed.2d 584 (1963). There, the Rhode Island Commission to Encourage Morality in Youth—a state-created entity—sought to stop the distribution of obscene books to kids. Id. at 59, 83 S.Ct. 631. So, it sent a letter to a book distributor with a list of verboten books and requested that they be taken off the shelves. Id. at 61-64, 83 S.Ct. 631. That request conveniently noted that compliance would "eliminate the necessity of our recommending prosecution to the Attorney General's department." Id. at 62 n.5, 83 S.Ct. 631. Per the Commission's request, police officers followed up to make sure the books were removed. Id. at 68, 83 S.Ct. 631. The Court concluded that this "system of informal censorship," which was "clearly [meant] to intimidate" the recipients through "threat of [] legal sanctions and other means of coercion," rendered the distributors' decision to remove the books a state action. Id. at 64, 67, 71-72, 83 S.Ct. 631. Given Bantam Books, not-so-subtle asks accompanied by a "system" of pressure (e.g., threats and follow-ups) are clearly coercive.

Still, it is rare that coercion is so black and white. More often, the facts are complex and sprawling as was the case in Vullo. That means it can be quite difficult to parse out coercion from persuasion. We, of course, are not the first to recognize this. In that vein, the Second Circuit has crafted a four-factor test that distills the considerations of Bantam Books into a workable standard. We, lacking such a device, adopt the Second Circuit's approach as a helpful, non-exclusive tool for completing the task before us, namely identifying when the state's messages cross into impermissible coercion.

The Second Circuit starts with the premise that a government message is coercive—as opposed to persuasive—if it "can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official's request." Vullo, 49 F.4th at 715 (quotation marks and citation omitted). To distinguish such "attempts to coerce" from "attempts to convince," courts look to four factors, namely (1) the speaker's "word choice and tone"; (2) "whether the speech was perceived as a threat"; (3) "the existence of regulatory authority"; and, "perhaps most importantly, (4) whether the speech refers to adverse consequences." Id. (citations omitted). Still, "[n]o one factor is dispositive." Id. (citing Bantam Books, 372 U.S. at 67, 83 S.Ct. 631). For example, the Second Circuit found in Vullo that the state officials' communications were not coercive because, in part, they were not phrased in an intimidating manner and only referenced reputational harms—an otherwise acceptable consequence for a governmental actor to threaten. Id. at 717, 719.

The Ninth Circuit has also adopted the four-factor approach and, in doing so, has cogently spelled out the nuances of each factor. Consider Kennedy v. Warren, 66 F.4th 1199 (9th Cir. 2023). There, Senator Elizabeth Warren penned a letter to Amazon asking it to stop selling a "false or misleading" book on COVID. Id. at 1204. The senator stressed that, by selling the book, Amazon was "providing consumers with false and misleading information" and, in doing so, was pursuing what she described as "an unethical, unacceptable, and potentially unlawful course of action." Id. So, she asked it to do better, including by providing a "public report" on the effects of its related sales algorithms and a "plan to modify these algorithms so that they no longer" push products peddling "COVID-19 misinformation." Id. at 1205. The authors sued, but the Ninth Circuit found no state action.

The court, lamenting that it can "be difficult to distinguish" between persuasion and coercion, turned to the Second Circuit's "useful non-exclusive" four-factor test. Id. at 1207. First, the court reasoned that the senator's letter, although made up of "strong rhetoric," was framed merely as a "request rather than a command." Id. at 1208. Considering both the text and the "tenor" of the parties' relationship, the court concluded that the letter was not unrelenting, nor did it "suggest[] that compliance was the only realistic option." Id. at 1208-09.

Second, and relatedly, even if she had said as much, the senator lacked regulatory authority—she "ha[d] no unilateral power to penalize Amazon." Id. at 1210. Still, the sum of the second prong is more than just power. Given that the overarching purpose of the four-factor test is to ask if the speaker's message can "reasonably be construed" as a "threat of adverse consequences," the lack of power is "certainly relevant." Id. at 1209-10. After all, the "absence of authority influences how a reasonable person would read" an official's message. Id. at 1210; see also Hammerhead Enters., Inc. v. Brezenoff, 707 F.2d 33, 39 (2d Cir. 1983) (finding no government coercion where city official lacked "the power to impose sanctions on merchants who did not respond to [his] requests") (citing Bantam Books, 372 U.S. at 71, 83 S.Ct. 631). For example, in Warren, it would have been "unreasonable" to believe, given Senator Warren's position "as a single Senator" who was "removed from the relevant levers of power," that she could exercise any authority over Amazon. 66 F.4th at 1210.

Still, the "lack of direct authority" is not entirely dispositive. Id. Because—per the Second and Ninth Circuits—the key question is whether a message can "reasonably be construed as coercive," id. at 1209, a speaker's power over the recipient need not be clearly defined or readily apparent, so long as it can be reasonably said that there is some tangible power lurking in the background. See Okwedy v. Molinari, 333 F.3d 339, 344 (2d Cir. 2003) (finding a private party "could reasonably have believed" it would face retaliation if it ignored a borough president's request because "[e]ven though [he] lacked direct regulatory control," there was an "implicit threat" that he would "use whatever authority he does have ... to interfere" with the party's cashflow). That, of course, was not present in Warren. So, the second prong was easily resolved against state action.

According to the Ninth Circuit, that tracks with its precedent. "[I]n Carlin Communications, Inc. v. Mountain States Telephone & Telegraph Co., 827 F.2d 1291 (9th Cir. 1987), [they] held that a deputy county attorney violated the First Amendment by threatening to prosecute a telephone company if it continued to carry a salacious dial-a-message service." Warren, 66 F.4th at 1207. But, "in American Family Association, Inc. v. City & County of San Francisco, 277 F.3d 1114 (9th Cir. 2002), [they] held that San Francisco officials did not violate the First Amendment when they criticized religious groups' anti-gay advertisements and urged television stations not to broadcast the ads." Id. The rub, per the court, was that "public officials may criticize practices that they would have no constitutional ability to regulate, so long as there is no actual or threatened imposition of government power or sanction." Id.

Third, the senator's letter "contain[ed] no explicit reference" to "adverse consequences." 66 F.4th at 1211. And, beyond that, no "threat [was] clear from the context." Id. To be sure, an "official does not need to say 'or else,'" but there must be some message—even if "unspoken"—that can be reasonably construed as intimating a threat. Id. at 1211-12. There, when read "holistically," the senator only implied that Amazon was "morally complicit" in bad behavior, nothing more. Id. at 1212.

The Ninth Circuit emphasized that officials may advocate for positions, including by "[g]enerating public pressure to motivate others to change their behavior." Warren, 66 F.4th at 1208. In that vein, it dismissed any references to "potential legal liability" because those statements do not necessarily "morph an effort to persuade into an attempt to coerce." Id. at 1209 (citing VDARE Found. v. City of Colo. Springs, 11 F.4th 1151, 1165 (10th Cir. 2021)). Instead, there must be "clear allegation[s] of legal violations or threat[s] of specific enforcement actions." Id.

Fourth, there was no indication that Amazon perceived the message as a threat. There was "no evidence" it "changed its algorithms"—"let alone that it felt compelled to do so"—as a result of the senator's urgings. Id. at 1211. Admittedly, it is not required that the recipient "bow[] to government pressure," but courts are more likely to find coercion if there is "some indication" that the message was "understood" as a threat, such as evidence of actual change. Id. at 1210-11. In Warren, it was apparent (and there was no sense to the contrary) that the minor policy change the company did make stemmed from reputational concerns, not "fears of liability in a court of law." Id. at 1211. Considering the above, the court found that the senator's message amounted to an attempt at persuasion, not coercion.

3.

To sum up, under the close nexus test, a private party's conduct may be state action if the government coerced or significantly encouraged it. Blum, 457 U.S. at 1004, 102 S.Ct. 2777. Although this test is not mechanical, see Roberts, 742 F.2d at 224 (noting that state action is "essentially [a] factual determination" made by "sifting facts and weighing circumstances case by case to determine if there is a sufficient nexus between the state and the particular aspect of the private individual's conduct which is complained of" (citation and quotation marks omitted)), there are clear, although not exclusive, ways to satisfy either prong.

For encouragement, we read the law to require that a governmental actor exercise active, meaningful control over the private party's decision in order to constitute a state action. That reveals itself in (1) entanglement in a party's independent decision-making or (2) direct involvement in carrying out the decision itself. Compare Roberts, 742 F.2d at 224 (state had such "continuous and intimate involvement" and supervision over horseracing decision that, when coupled with its authority over the actor, it was considered a state action) and Howard Gault, 848 F.2d at 555 (state eagerly, and effectively, assisted a private party in shutting down a protest), with Blum, 457 U.S. at 1008, 102 S.Ct. 2777 (state did not sufficiently influence the decision as it was made subject to independent standards). In any of those scenarios, the state has such a "close nexus" with the private party that the government actor is practically "responsible" for the decision, Blum, 457 U.S. at 1004, 102 S.Ct. 2777, because it has necessarily encouraged the private party to act and, in turn, commandeered its independent judgment, O'Handley, 62 F.4th at 1158-59.

For coercion, we ask whether the government compelled the decision by intimating—through threats or otherwise—that some form of punishment will follow a failure to comply. Vullo, 49 F.4th at 715. Sometimes, that is obvious from the facts. See, e.g., Bantam Books, 372 U.S. at 62-63, 83 S.Ct. 631 (a mafiosi-style threat of referral to the Attorney General accompanied with persistent pressure and follow-ups). But, more often, it is not. So, to help distinguish permissible persuasion from impermissible coercion, we turn to the Second (and Ninth) Circuit's four-factor test. Again, homing in on whether the government "intimat[ed] that some form of punishment" will follow a "failure to accede," we parse the speaker's messages to assess (1) the word choice and tone, including the overall "tenor" of the parties' relationship; (2) the recipient's perception; (3) the presence of authority, which includes whether it is reasonable to fear retaliation; and (4) whether the speaker refers to adverse consequences. Vullo, 49 F.4th at 715; see also Warren, 66 F.4th at 1207.

Each factor, though, has important considerations to keep in mind. For word choice and tone, "[a]n interaction will tend to be more threatening if the official refuses to take 'no' for an answer and pesters the recipient until it succumbs." Warren, 66 F.4th at 1209 (citing Bantam Books, 372 U.S. at 62-63, 83 S.Ct. 631). That is so because we consider the overall "tenor" of the parties' relationship. Id. For authority, there is coercion even if the speaker lacks present ability to act so long as it can "reasonably be construed" as a threat worth heeding. Compare id. at 1210 (single senator had no worthwhile power over recipient, practical or otherwise), with Okwedy, 333 F.3d at 344 (although local official lacked direct power over the recipient, company "could reasonably have believed" from the letter that there was "an implicit threat" and that he "would use whatever authority he does have" against it).

As for perception, it is not necessary that the recipient "admit that it bowed to government pressure," nor is it even "necessary for the recipient to have complied with the official's request"—"a credible threat may violate the First Amendment even if 'the victim ignores it, and the threatener folds his tent.'" Warren, 66 F.4th at 1210 (quoting Backpage.com, LLC v. Dart, 807 F.3d 229, 231 (7th Cir. 2015)). Still, a message is more likely to be coercive if there is some indication that the party's decision resulted from the threat. Id. at 1210-11. Finally, as for adverse consequences, the government need not speak its threat aloud if, given the circumstances, it is fair to say that the message intimates some form of punishment. Id. at 1209. If these factors weigh in favor of finding the government's message coercive, the coercion test is met, and the private party's resulting decision is a state action.

B.

With that in mind, we turn to the case at hand. We start with "the specific conduct of which the plaintiff complains." Am. Mfrs., 526 U.S. at 51, 119 S.Ct. 977. Here, that is "censor[ing] disfavored speakers and viewpoints" on social media. The Plaintiffs allege that the "Defendants [] coerced, threatened, and pressured social-media platforms"—via "threats of adverse government action" like increased regulation, antitrust enforcement, and changes to Section 230—to make those censorship decisions. That campaign, per the Plaintiffs, was multi-faceted—the officials "publicly threaten[ed] [the] companies" while they privately piled on "unrelenting pressure" via "demands for greater censorship." And they succeeded—the platforms censored disfavored content.

The officials do not deny that they worked alongside the platforms. Instead, they argue that their conduct—asking or trying to persuade the platforms to act—was permissible government speech. So, we are left with the task of sifting out any coercion and significant encouragement from their attempts at persuasion. Here, there were multiple speakers and messages. Taking that in context, we apply the law to one set of officials at a time, starting with the White House and Office of the Surgeon General.

1.

We find that the White House, acting in concert with the Surgeon General's office, likely (1) coerced the platforms to make their moderation decisions by way of intimidating messages and threats of adverse consequences, and (2) significantly encouraged the platforms' decisions by commandeering their decision-making processes, both in violation of the First Amendment.

Generally speaking, officials from the White House and the Surgeon General's office had extensive, organized communications with platforms. They met regularly, traded information and reports, and worked together on a wide range of efforts. That working relationship was, at times, sweeping. Still, those facts alone likely are not problematic from a First-Amendment perspective. But, the relationship between the officials and the platforms went beyond that. In their communications with the platforms, the officials went beyond advocating for policies, Southworth, 529 U.S. at 229, 120 S.Ct. 1346, or making no-strings-attached requests to moderate content, Warren, 66 F.4th at 1209. Their interaction was "something more." Roberts, 742 F.2d at 228.

We start with coercion. On multiple occasions, the officials coerced the platforms into direct action via urgent, uncompromising demands to moderate content. Privately, the officials were not shy in their requests—they asked the platforms to remove posts "ASAP" and accounts "immediately," and to "slow[] down" or "demote[]" content. In doing so, the officials were persistent and angry. Cf. Bantam Books, 372 U.S. at 62-63, 83 S.Ct. 631. When the platforms did not comply, officials followed up by asking why posts were "still up," stating (1) "how does something like [this] happen," (2) "what good is" flagging if it did not result in content moderation, (3) "I don't know why you guys can't figure this out," and (4) "you are hiding the ball," while demanding "assurances" that posts were being taken down. And, more importantly, the officials threatened—both expressly and implicitly—to retaliate against inaction. Officials threw out the prospect of legal reforms and enforcement actions while subtly insinuating it would be in the platforms' best interests to comply. As one official put it, "removing bad information" is "one of the easy, low-bar things you guys [can] do to make people like me"—that is, White House officials—"think you're taking action."

That alone may be enough for us to find coercion. Like in Bantam Books, the officials here set about to force the platforms to remove metaphorical books from their shelves. It is uncontested that, between the White House and the Surgeon General's office, government officials asked the platforms to remove undesirable posts and users from their platforms, sent follow-up messages of condemnation when they did not, and publicly called on the platforms to act. When the officials' demands were not met, the platforms received promises of legal regime changes, enforcement actions, and other unspoken threats. That was likely coercive. See Warren, 66 F.4th at 1211-12.

That being said, even though coercion may have been readily apparent here, we find it fitting to consult the Second Circuit's four-factor test for distinguishing coercion from persuasion. In asking whether the officials' messages can "reasonably be construed" as threats of adverse consequences, we look to (1) the officials' word choice and tone; (2) the recipient's perception; (3) the presence of authority; and (4) whether the speaker refers to adverse consequences. Vullo, 49 F.4th at 715; see also Warren, 66 F.4th at 1207.

First, the officials' demeanor. We find, like the district court, that the officials' communications—reading them in "context, not in isolation"—were on the whole intimidating. Warren, 66 F.4th at 1208. In private messages, the officials demanded "assurances" from the platforms that they were moderating content in compliance with the officials' requests, and used foreboding, inflammatory, and hyper-critical phraseology when they seemingly did not, like "you are hiding the ball," you are not "trying to solve the problem," and we are "gravely concerned" that you are "one of the top drivers of vaccine hesitancy." In public, they said that the platforms were irresponsible, let "misinformation [] poison" America, were "literally costing ... lives," and were "killing people." While officials are entitled to "express their views and rally support for their positions," the "word choice and tone" applied here reveals something more than mere requests. Id. at 1207-08.

Like Bantam Books—and unlike the requests in Warren—many of the officials' asks were "phrased virtually as orders," 372 U.S. at 68, 83 S.Ct. 631, like requests to remove content "ASAP" or "immediately." The threatening "tone" of the officials' commands, as well as of their "overall interaction" with the platforms, is made all the more evident when we consider the persistent nature of their messages. Generally speaking, "[a]n interaction will tend to be more threatening if the official refuses to take 'no' for an answer and pesters the recipient until it succumbs." Warren, 66 F.4th at 1209 (citing Bantam Books, 372 U.S. at 62-63, 83 S.Ct. 631). Urgency can have the same effect. See Backpage.com, 807 F.3d at 237 (finding the "urgency" of a sheriff's letter, including a follow-up, "imposed another layer of coercion due to its strong suggestion that the companies could not simply ignore" the sheriff), cert. denied, Dart v. Backpage.com, LLC, 580 U.S. 816, 137 S. Ct. 46, 196 L.Ed.2d 28 (2016). Here, the officials' correspondences were both persistent and urgent. They sent repeated follow-up emails, whether to ask why a post or account was "still up" despite being flagged or to probe deeper into the platforms' internal policies. On the latter point, for example, one official asked at least twelve times for detailed information on Facebook's moderation practices and activities. Admittedly, many of the officials' communications are not by themselves coercive. But, we do not take a speaker's communications "in isolation." Warren, 66 F.4th at 1208. Instead, we look to the "tenor" of the parties' relationship and the conduct of the government in context. Id. at 1209. Given their treatment of the platforms as a whole, we find the officials' tone and demeanor was coercive, not merely persuasive.

Second, we ask how the platforms perceived the communications. Notably, "a credible threat may violate the First Amendment even if 'the victim ignores it, and the threatener folds his tent.'" Id. at 1210 (quoting Backpage.com, 807 F.3d at 231). Still, it is more likely to be coercive if there is some evidence that the recipient's subsequent conduct is linked to the official's message. For example, in Warren, the Ninth Circuit concluded that Amazon's decision to stop advertising a specific book was "more likely ... a response to widespread concerns about the spread of COVID-19," as there was "no evidence that the company changed [course] in response to Senator Warren's letter." Id. at 1211. Here, there is plenty of evidence—both direct and circumstantial, considering the platforms' contemporaneous actions—that the platforms were influenced by the officials' demands. When officials asked for content to be removed, the platforms took it down. And, when they asked for the platforms to be more aggressive, "interven[e]" more often, take quicker actions, and modify their "internal policies," the platforms did—and they sent emails and assurances confirming as much. For example, as was common after public critiques, one platform assured the officials they were "committed to addressing the [] misinformation that you've called on us to address" after the White House issued a public statement. Another time, one company promised to make an employee "available on a regular basis" so that the platform could "automatically prioritize" the officials' requests after criticism of the platform's response time. Yet another time, a platform said it was going to "adjust [its] policies" to include "specific recommendations for improvement" from the officials, and emailed as much because they "want[ed] to make sure to keep you informed of our work on each" change. Those are just a few of many examples of the platforms changing course—and acknowledging as much—as a direct result of the officials' messages.

Third, we turn to whether the speaker has "authority over the recipient." 66 F.4th at 1210. Here, that is clearly the case. As an initial matter, the White House wields significant power in this Nation's constitutional landscape. It enforces the laws of our country, U.S. Const. art. II, and—as the head of the executive branch—directs an army of federal agencies that create, modify, and enforce federal regulations. We can hardly say that, like the senator in Warren, the White House is "removed from the relevant levers of power." 66 F.4th at 1210. At the very least, as agents of the executive branch, the officials' powers track somewhere closer to those of the commission in Bantam Books—they were legislatively given the power to "investigate violations[] and recommend prosecutions." Id. (citing Bantam Books, 372 U.S. at 66, 83 S.Ct. 631).

But, authority over the recipient does not have to be a clearly-defined ability to act under the close nexus test. Instead, a generalized, non-descript means to punish the recipient may suffice depending on the circumstances. As the Ninth Circuit explained in Warren, a message may be "inherently coercive" if, for example, it was conveyed by a "law enforcement officer" or "penned by an executive official with unilateral power." Id. (emphasis added). In other words, a speaker's power may stem from an inherent authority over the recipient. See, e.g., Backpage.com, 807 F.3d 229. That reasoning is likely applicable here, too, given the officials' executive status.

It is not even necessary that an official have direct power over the recipient. Even if the officials "lack[ed] direct authority" over the platforms, the cloak of authority may still satisfy the authority prong. See Warren, 66 F.4th at 1210. After all, we ask whether a "reasonable person" would be threatened by an official's statements. Id. Take, for example, Okwedy. There, a borough president penned a letter to a company—which, per the official, owned a "number of billboards on Staten Island and derive[d] substantial economic benefits from them"—and "call[ed] on [them] as a responsible member of the business community to please contact" his "legal counsel." 333 F.3d at 342. The Second Circuit found that, even though the official "lacked direct regulatory authority" or control over the company, an "implicit threat" flowed from his letter because he had some innate authority to affect the company. Id. at 344. The Second Circuit noted that "[a]lthough the existence of regulatory or other direct decisionmaking authority is certainly relevant to the question of whether a government official's comments were unconstitutionally threatening or coercive, a defendant without such direct regulatory or decisionmaking authority can also exert an impermissible type or degree of pressure." Id. at 343. Consider another example, Backpage.com. There, a sheriff sent a cease-and-desist letter to credit card companies—which he admittedly "had no authority to take any official action" against—to stop doing business with a website. 807 F.3d at 230, 236. "[E]ven if the companies understood the jurisdictional constraints on [the sheriff]'s ability to proceed against them directly," the sheriff's letter was still coercive because, among other reasons, it "invok[ed] the legal obligations of [the recipients] to cooperate with law enforcement," and the sheriff could easily "refer the credit card companies to the appropriate authority to investigate" their dealings, much like a White House official could contact the Department of Justice. Id. at 236-37.

This was true even though the financial institutions were large, sophisticated, and presumably understood the federal authorities were unlikely to prosecute the companies. Backpage.com, 807 F.3d at 234. As the Seventh Circuit explained, it was still in the credit card companies' financial interests to comply. Backpage's measly $135 million in annual revenue was a drop in the bucket of the financial service companies' combined net revenue of $22 billion. Id. at 236. Unlike credit card processors that at least made money servicing Backpage, social-media platforms typically depend on advertisers, not their users, for revenue. Cf. Wash. Post v. McManus, 944 F.3d 506, 516 (4th Cir. 2019) (holding campaign finance regulations on online ads unconstitutional where they "ma[de] it financially irrational, generally speaking, for platforms to carry political speech when other, more profitable options are available").

True, the government can "appeal[]" to a private party's "interest in avoiding liability" so long as that reference is not meant to intimidate or compel. Id. at 237; see also Vullo, 49 F.4th at 717-19 (statements were non-coercive because they referenced legitimate use of powers in a nonthreatening manner). But here, the officials' demands that the platforms remove content and change their practices were backed by the officials' unilateral power to act or, at the very least, their ability to inflict "some form of punishment" against the platforms. Okwedy, 333 F.3d at 342 (citation omitted) (emphasis added). Therefore, the authority factor weighs in favor of finding the officials' messages coercive.

Or, as the Ninth Circuit put it, "public officials may criticize practices that they would have no constitutional ability to regulate, so long as there is no actual or threatened imposition of government power or sanction." Warren, 66 F.4th at 1207 (citation omitted) (emphasis added).

Finally, and "perhaps most important[ly]," we ask whether the speaker "refers to adverse consequences that will follow if the recipient does not accede to the request." Warren, 66 F.4th at 1211 (citing Vullo, 49 F.4th at 715). Explicit and subtle threats both work—"an official does not need to say 'or else' if a threat is clear from the context." Id. (citing Backpage.com, 807 F.3d at 234). Again, this factor is met.

Here, the officials made express threats and, at the very least, leaned into the inherent authority of the President's office. The officials made inflammatory accusations, such as saying that the platforms were "poison[ing]" the public, and "killing people." The platforms were told they needed to take greater responsibility and action. Then, they followed their statements with threats of "fundamental reforms" like regulatory changes and increased enforcement actions that would ensure the platforms were "held accountable." But, beyond express threats, there was always an "unspoken 'or else.'" Warren, 66 F.4th at 1212. After all, as the executive of the Nation, the President wields awesome power. The officials were not shy to allude to that understanding native to every American—when the platforms faltered, the officials warned them that they were "[i]nternally ... considering our options on what to do," their "concern[s] [were] shared at the highest (and I mean highest) levels of the [White House]," and the "President has long been concerned about the power of large social media platforms." Unlike the letter in Warren, the language deployed in the officials' campaign reveals clear "plan[s] to punish" the platforms if they did not surrender. Warren, 66 F.4th at 1209. Compare id., with Backpage.com, 807 F.3d at 237. Consequently, the four-factor test weighs heavily in favor of finding the officials' messages were coercive, not persuasive.

Notably, the Ninth Circuit recently reviewed a case that is strikingly similar to ours. In O'Handley, officials from the California Secretary of State's office allegedly "act[ed] in concert" with Twitter to censor speech on the platform. 62 F.4th at 1153. Specifically, the parties had a "collaborative relationship" where officials flagged tweets and Twitter "almost invariably" took them down. Id. Therefore, the plaintiff contended, when his election-fraud-based post was removed, California "abridged his freedom of speech" because it had "pressured Twitter to remove disfavored content." Id. at 1163. But, the Ninth Circuit disagreed, finding the close nexus test was not satisfied. The court reasoned that there was no clear indication that Twitter "would suffer adverse consequences if it refused" to comply with California's request. Id. at 1158. Instead, it was a "purely optional," "no strings attached" request. Id. Consequently, "Twitter complied with the request under the terms of its own content-moderation policy and using its own independent judgment." Id. To the Ninth Circuit, there was no indication—whether via tone, content, or otherwise—that the state would retaliate against inaction given the insubstantial relationship. Ultimately, the officials' conduct was "far from the type of coercion" seen in cases like Bantam Books. Id. In contrast, here, the officials made clear that the platforms would suffer adverse consequences if they failed to comply, through express or implied threats, and thus the requests were not optional.

The Ninth Circuit insightfully noted the difficult task of applying the coercion test in the First Amendment context:
[W]e have drawn a sharp distinction between attempts to convince and attempts to coerce. Particularly relevant here, we have held that government officials do not violate the First Amendment when they request that a private intermediary not carry a third party's speech so long as the officials do not threaten adverse consequences if the intermediary refuses to comply. This distinction tracks core First Amendment principles. A private party can find the government's stated reasons for making a request persuasive, just as it can be moved by any other speaker's message. The First Amendment does not interfere with this communication so long as the intermediary is free to disagree with the government and to make its own independent judgment about whether to comply with the government's request.
O'Handley, 62 F.4th at 1158. After all, consistent with their constitutional and statutory authority, state "[a]gencies are permitted to communicate in a non-threatening manner with the entities they oversee without creating a constitutional violation." Id. at 1163 (citing Vullo, 49 F.4th at 714-19).

Given all of the above, we are left only with the conclusion that the officials' statements were coercive. That conclusion tracks with the decisions of other courts. After reviewing the four-factor test, it is apparent that the officials' messages could "reasonably be construed" as threats. Warren, 66 F.4th at 1208; Vullo, 49 F.4th at 716. Here, unlike in Warren, the officials' "call[s] to action"—given the context and officials' tone, the presence of some authority, the platforms' yielding responses, and the officials' express and implied references to adverse consequences—"directly suggest[ed] that compliance was the only realistic option to avoid government sanction." 66 F.4th at 1208. And, unlike O'Handley, the officials were not simply flagging posts with "no strings attached," 62 F.4th at 1158—they did much, much more.

Now, we turn to encouragement. We find that the officials also significantly encouraged the platforms to moderate content by exercising active, meaningful control over those decisions. Specifically, the officials entangled themselves in the platforms' decision-making processes, namely their moderation policies. See Blum, 457 U.S. at 1008, 102 S.Ct. 2777. That active, meaningful control is evidenced plainly by a view of the record. The officials had consistent and consequential interaction with the platforms and constantly monitored their moderation activities. In doing so, they repeatedly communicated their concerns, thoughts, and desires to the platforms. The platforms responded with cooperation—they invited the officials to meetings, roundups, and policy discussions. And, more importantly, they complied with the officials' requests, including making changes to their policies.

The officials began with simple enough asks of the platforms—"can you share more about your framework here" or "do you have data on the actual number" of removed posts? But, the tenor later changed. When the platforms' policies were not performing to the officials' liking, they pressed for more, persistently asking what "interventions" were being taken, "how much content [was] being demoted," and why certain posts were not being removed. Eventually, the officials pressed for outright change to the platforms' moderation policies. They did so privately and publicly. One official emailed a list of proposed changes and said, "this is circulating around the building and informing thinking." The White House Press Secretary called on the platforms to adopt "proposed changes" that would create a more "robust enforcement strategy." And the Surgeon General published an advisory calling on the platforms to "[e]valuate the effectiveness of [their] internal policies" and implement changes. Beyond that, they relentlessly asked the platforms to remove content, even giving reasons as to why such content should be taken down. They also followed up to ensure compliance and, when met with a response, asked how the internal decision was made.

And, the officials' campaign succeeded. The platforms, in capitulation to state-sponsored pressure, changed their moderation policies. The platforms explicitly recognized that. For example, one platform told the White House it was "making a number of changes"—which aligned with the officials' demands—as it knew its "position on [misinformation] continues to be a particular concern" for the White House. The platform noted that, in line with the officials' requests, it would "make sure that these additional [changes] show results—the stronger demotions in particular should deliver real impact." Similarly, one platform emailed a list of "commitments" after a meeting with the White House which included policy "changes" "focused on reducing the virality" of anti-vaccine content even when it "does not contain actionable misinformation." Relatedly, one platform told the Surgeon General that it was "committed to addressing the [] misinformation that you've called on us to address," including by implementing a set of jointly proposed policy changes from the White House and the Surgeon General. Consequently, it is apparent that the officials exercised meaningful control—via changes to the platforms' independent processes—over the platforms' moderation decisions. By pushing changes to the platforms' policies through their expansive relationship with and informal oversight over the platforms, the officials imparted a lasting influence on the platforms' moderation decisions without the need for any further input. In doing so, the officials ensured that any moderation decisions were not made in accordance with independent judgments guided by independent standards. See id.; see also Am. Mfrs., 526 U.S. at 52, 119 S.Ct. 977 ("The decision to withhold payment, like the decision to transfer Medicaid patients to a lower level of care in Blum, is made by concededly private parties, and 'turns on ... judgments made by private parties' without 'standards ... established by the State.'"). Instead, they were encouraged by the officials' imposed standards.

In sum, we find that the White House officials, in conjunction with the Surgeon General's office, coerced and significantly encouraged the platforms to moderate content. As a result, the platforms' actions "must in law be deemed to be that of the State." Blum, 457 U.S. at 1004, 102 S.Ct. 2777.

2.

Next, we consider the FBI. We find that the FBI, too, likely (1) coerced the platforms into moderating content, and (2) encouraged them to do so by effecting changes to their moderation policies, both in violation of the First Amendment.

We start with coercion. Similar to the White House, Surgeon General, and CDC officials, the FBI regularly met with the platforms, shared "strategic information," frequently alerted the social media companies to misinformation spreading on their platforms, and monitored their content moderation policies. But, the FBI went beyond that—they urged the platforms to take down content. Turning to the Second Circuit's four-factor test, we find that those requests were coercive. Vullo, 49 F.4th at 715.

First, given the record before us, we cannot say that the FBI's messages were plainly threatening in tone or manner. Id. But, second, we do find the FBI's requests came with the backing of clear authority over the platforms. After all, content moderation requests "might be inherently coercive if sent by ... [a] law enforcement officer." Warren, 66 F.4th at 1210 (citations omitted); see also Zieper v. Metzinger, 392 F. Supp. 2d 516, 531 (S.D.N.Y. 2005) (holding that a reasonable jury could find an FBI agent's request coercive when he asked an internet service provider to take down a controversial video that could be "inciting a riot" because he was "an FBI agent charged with investigating the video"); Backpage, 807 F.3d at 234 ("[C]redit card companies don't like being threatened by a law-enforcement official that he will sic the feds on them, even if the threat may be empty."). This is especially true of the lead law enforcement, investigatory, and domestic security agency for the executive branch. Consequently, because the FBI wielded some authority over the platforms, see Okwedy, 333 F.3d at 344, the FBI's takedown requests can "reasonably be construed" as coercive in nature, Warren, 66 F.4th at 1210.

Third, although the FBI's communications did not plainly reference adverse consequences, an actor need not express a threat aloud so long as, given the circumstances, the message intimates that some form of punishment will follow noncompliance. Id. at 1209. Here, beyond its inherent authority, the FBI—unlike most federal actors—also has tools at its disposal to force a platform to take down content. For instance, in Zieper, an FBI agent asked a web-hosting platform to take down a video portraying an imaginary documentary showing preparations for a military take-over of Times Square on the eve of the new millennium. 392 F. Supp. 2d at 520-21. In appealing to the platform, the FBI agent said that he was concerned that the video could be "inciting a riot" and testified that he was trying to appeal to the platform's "'good citizenship' by pointing out a public safety concern." Id. at 531. And these appeals to the platform's "good citizenship" worked—the platform took down the video. Id. at 519. The Southern District of New York concluded that a reasonable jury could find that statement coercive, "particularly when said by an FBI agent charged with investigating the video." Id. at 531. Indeed, the question is whether a message intimates that some form of punishment may be used against the recipient, an analysis that includes means of retaliation that are not readily apparent. See Warren, 66 F.4th at 1210.

Fourth, the platforms clearly perceived the FBI's messages as threats. For example, right before the 2022 congressional election, the FBI warned the platforms of "hack and dump" operations from "state-sponsored actors" that would spread misinformation through their sites. In doing so, the FBI officials leaned into their inherent authority. So, the platforms reacted as expected—by taking down content, including posts and accounts that originated from the United States, in direct compliance with the request. Considering the above, we conclude that the FBI coerced the platforms into moderating content. But, the FBI's endeavors did not stop there.

We also find that the FBI likely significantly encouraged the platforms to moderate content by entangling itself in the platforms' decision-making processes. Blum, 457 U.S. at 1008, 102 S.Ct. 2777. Beyond taking down posts, the platforms also changed their terms of service in concert with recommendations from the FBI. For example, several platforms "adjusted" their moderation policies to capture "hack-and-leak" content after the FBI asked them to do so (and followed up on that request). Consequently, when the platforms subsequently moderated content that violated their newly modified terms of service (e.g., the results of hack-and-leaks), they did not do so via independent standards. See Blum, 457 U.S. at 1008, 102 S.Ct. 2777. Instead, those decisions were made subject to commandeered moderation policies.

In short, when the platforms acted, they did so in response to the FBI's inherent authority and based on internal policies influenced by FBI officials. Taking those facts together, we find the platforms' decisions were significantly encouraged and coerced by the FBI.

Plaintiffs and several amici assert that the FBI and other federal actors coerced or significantly encouraged the social-media companies into disseminating information that was favorable to the administration—information the federal officials knew was false or misleading. We express no opinion on those assertions because they are not necessary to our holding here.

3.

Next, we turn to the CDC. We find that, although not plainly coercive, the CDC officials likely significantly encouraged the platforms' moderation decisions, meaning they violated the First Amendment.

We start with coercion. Here, like the other officials, the CDC regularly met with the platforms and frequently flagged content for removal. But, unlike the others, the CDC's requests for removal were not coercive—they did not ask the platforms in an intimidating or threatening manner, did not possess any clear authority over the platforms, and did not allude to any adverse consequences. Consequently, we cannot say the platforms' moderation decisions were coerced by CDC officials.

The same, however, cannot be said for significant encouragement. Ultimately, the CDC was entangled in the platforms' decision-making processes. Blum, 457 U.S. at 1008, 102 S.Ct. 2777.

The CDC's relationship with the platforms began by defining—in "Be On the Lookout" meetings—what was (and was not) "misinformation" for the platforms. Specifically, CDC officials issued "advisories" to the platforms warning them about misinformation "hot topics" to be wary of. From there, CDC officials instructed the platforms to label disfavored posts with "contextual information," and asked for "amplification" of approved content. That led to CDC officials becoming intimately involved in the various platforms' day-to-day moderation decisions. For example, they communicated about how a platform's "moderation team" reached a certain decision, how it was "approach[ing] adding labels" to particular content, and how it was deploying manpower. Consequently, the CDC garnered an extensive relationship with the platforms.

From that relationship, the CDC, through authoritative guidance, directed changes to the platforms' moderation policies. At first, the platforms asked CDC officials to decide whether certain claims were misinformation. In response, CDC officials told the platforms whether such claims were true or false, and whether information was "misleading" or needed to be addressed via CDC-backed labels. That back-and-forth then led to "[s]omething more." Roberts, 742 F.2d at 228.

Specifically, CDC officials directly impacted the platforms' moderation policies. For example, in meetings with the CDC, the platforms actively sought to "get into [] policy stuff" and run their moderation policies by the CDC to determine whether the platforms' standards were "in the right place." Ultimately, the platforms came to heavily rely on the CDC. They adopted rule changes meant to implement the CDC's guidance. As one platform said, they "were able to make [changes to the 'misinfo policies'] based on the conversation [they] had last week with the CDC," and they "immediately updated [their] policies globally" following another meeting. And, those adoptions led the platforms to make moderation decisions based entirely on the CDC's say-so—"[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them." That dependence, at times, was total. For example, one platform asked the CDC how it should approach certain content and even asked the CDC to double check and proofread its proposed labels.

Viewing these facts, we are left with no choice but to conclude that the CDC significantly encouraged the platforms' moderation decisions. Unlike in Blum, the platforms' decisions were not made by independent standards, 457 U.S. at 1008, 102 S.Ct. 2777, but instead were marred by modification from CDC officials. Thus, the resulting content moderation, "while not compelled by the state, was so significantly encouraged, both overtly and covertly" by CDC officials that those decisions "must in law be deemed to be that of the state." Howard Gault, 848 F.2d at 555 (alterations adopted) (internal quotation marks and citation omitted).

4.

Next, we examine CISA. We find that, for many of the same reasons as the FBI and the CDC, CISA also likely violated the First Amendment. First, CISA was the "primary facilitator" of the FBI's interactions with the social-media platforms and worked in close coordination with the FBI to push the platforms to change their moderation policies to cover "hack-and-leak" content. Second, CISA's "switchboarding" operations, which, in theory, involved CISA merely relaying flagged social-media posts from state and local election officials to the platforms, were, in reality, "[s]omething more." Roberts, 742 F.2d at 228. CISA used its frequent interactions with social-media platforms to push them to adopt more restrictive policies on censoring election-related speech. And CISA officials affirmatively told the platforms whether the content they had "switch-boarded" was true or false. Thus, when the platforms acted to censor CISA-switchboarded content, they did not do so independently. Rather, the platforms' censorship decisions were made under policies that CISA had pressured them into adopting and based on CISA's determination of the veracity of the flagged information. Thus, CISA likely significantly encouraged the platforms' content-moderation decisions and thereby violated the First Amendment. See Blum, 457 U.S. at 1008, 102 S.Ct. 2777; Howard Gault, 848 F.2d at 555.

5.

Finally, we address the remaining officials—the NIAID and the State Department. Having reviewed the record, we find the district court erred in enjoining these other officials. Put simply, there was not, at this stage, sufficient evidence to find that it was likely these groups coerced or significantly encouraged the platforms.

For the NIAID officials, it is not apparent that they ever communicated with the social-media platforms. Instead, the record shows, at most, that public statements by Director Anthony Fauci and other NIAID officials promoted the government's scientific and policy views and attempted to discredit opposing ones—quintessential examples of government speech that do not run afoul of the First Amendment. See Pleasant Grove City v. Summum, 555 U.S. 460, 467-68, 129 S.Ct. 1125, 172 L.Ed.2d 853 (2009) ("[The government] is entitled to say what it wishes, and to select the views that it wants to express." (quotation marks and citations omitted)); Nat'l Endowment for the Arts v. Finley, 524 U.S. 569, 598, 118 S.Ct. 2168, 141 L.Ed.2d 500 (1998) (Scalia, J., concurring) ("It is the very business of government to favor and disfavor points of view...."). Consequently, with only insignificant (if any) communication (direct or indirect) with the platforms, we cannot say that the NIAID officials likely coerced or encouraged the platforms to act.

As for the State Department, while it did communicate directly with the platforms, so far there is no evidence these communications went beyond educating the platforms on "tools and techniques" used by foreign actors. There is no indication that State Department officials flagged specific content for censorship, suggested policy changes to the platforms, or engaged in any similar actions that would reasonably bring their conduct within the scope of the First Amendment's prohibitions. After all, their messages do not appear coercive in tone, did not refer to adverse consequences, and were not backed by any apparent authority. And, per this record, those officials were not involved to any meaningful extent with the platforms' moderation decisions or standards.

* * *

Ultimately, we find the district court did not err in determining that several officials—namely the White House, the Surgeon General, the CDC, the FBI, and CISA—likely coerced or significantly encouraged social-media platforms to moderate content, rendering those decisions state actions. In doing so, the officials likely violated the First Amendment.

Here, in holding that some of the officials likely coerced or sufficiently encouraged the platforms to censor content, we pass no judgment on any joint actor or conspiracy-based state action theory.

"With very limited exceptions, none applicable to this case, censorship—'an effort by administrative methods to prevent the dissemination of ideas or opinions thought dangerous or offensive,' as distinct from punishing such dissemination (if it falls into one of the categories of punishable speech, such as defamation or threats) after it has occurred—is prohibited by the First Amendment as it has been understood by the courts." Backpage.com, 807 F.3d at 235 (citation omitted).

But, we emphasize the limited reach of our decision today. We do not uphold the injunction against all the officials named in the complaint. Indeed, many of those officials were permissibly exercising government speech, "carrying out [their] responsibilities," or merely "engaging in [a] legitimate [] action." Vullo, 49 F.4th at 718-19. That distinction is important because the state-action doctrine is vitally important to our Nation's operation—by distinguishing between the state and the People, it promotes "a robust sphere of individual liberty." Halleck, 139 S. Ct. at 1928. That is why the Supreme Court has been reluctant to expand the scope of the doctrine. See Matal v. Tam, 582 U.S. 218, 235, 137 S.Ct. 1744, 198 L.Ed.2d 366 (2017) ("[W]e must exercise great caution before extending our government-speech precedents."). If just any relationship with the government "sufficed to transform a private entity into a state actor, a large swath of private entities in America would suddenly be turned into state actors and be subject to a variety of constitutional constraints on their activities." Halleck, 139 S. Ct. at 1932. So, we do not take our decision today lightly. But, the Supreme Court has rarely been faced with a coordinated campaign of this magnitude orchestrated by federal officials that jeopardized a fundamental aspect of American life. Therefore, the district court was correct in its assessment—"unrelenting pressure" from certain government officials likely "had the intended result of suppressing millions of protected free speech postings by American citizens." We see no error or abuse of discretion in that finding.

Our holding today, as is appropriate under the state-action doctrine, is limited. Like in Roberts, we narrowly construe today's finding of state action to apply only to the challenged decisions. See 742 F.2d at 228 ("We do not doubt that many of the actions of the racetrack and its employees are no more than private business decisions," but "[i]n the area of stalling, [] state regulation and involvement is so specific and so pervasive that [such] decisions may be considered to bear the imprimatur of the state.").

V.

Next, we address the equities. Plaintiffs seeking a preliminary injunction must show that irreparable injury is "likely" absent an injunction, the balance of the equities weighs in their favor, and an injunction is in the public interest. Winter v. Nat. Res. Def. Council, Inc., 555 U.S. 7, 22, 129 S.Ct. 365, 172 L.Ed.2d 249 (2008) (collecting cases).

While "[t]he loss of First Amendment freedoms, for even minimal periods

of time, unquestionably constitutes irreparable injury," Roman Cath. Diocese of Brooklyn v. Cuomo, — U.S. —, 141 S. Ct. 63, 67, 208 L.Ed.2d 206 (2020) (per curiam) (quoting Elrod v. Burns, 427 U.S. 347, 373, 96 S.Ct. 2673, 49 L.Ed.2d 547 (1976) (plurality opinion)), "invocation of the First Amendment cannot substitute for the presence of an imminent, non-speculative irreparable injury," Google, Inc. v. Hood, 822 F.3d 212, 228 (5th Cir. 2016).

Here, the district court found that the Plaintiffs submitted enough evidence to show that irreparable injury is likely to occur during the pendency of the litigation. In so doing, the district court rejected the officials' arguments that the challenged conduct had ceased and that future harm was speculative, drawing on mootness and standing doctrines. Applying the standard for mootness, the district court concluded that a defendant must show that "it is absolutely clear the alleged wrongful behavior could not reasonably be expected to recur" and that the officials had failed to make such showing here. In assessing whether Plaintiffs' claims of future harm were speculative and dependent on the actions of social-media companies, the district court applied a quasi-standing analysis and found that the Plaintiffs had alleged a "substantial risk" of future harm that is not "imaginary or wholly speculative," pointing to the officials' ongoing coordination with social-media companies and willingness to suppress free speech on a myriad of hot-button issues.

We agree that the Plaintiffs have shown that they are likely to suffer an irreparable injury. Deprivation of First Amendment rights, even for a short period, is sufficient to establish irreparable injury. Elrod, 427 U.S. at 373, 96 S.Ct. 2673; Cuomo, 141 S. Ct. at 67; Opulent Life Church v. City of Holly Springs, 697 F.3d 279, 295 (5th Cir. 2012).

The district court was right to be skeptical of the officials' claims that they had stopped all challenged conduct. Cf. Speech First, Inc. v. Fenves, 979 F.3d 319, 328 (5th Cir. 2020) ("[A] defendant's voluntary cessation of a challenged practice does not deprive a federal court of its power to determine the legality of the practice, even in cases in which injunctive relief is sought."). But, the district court's use of a "not imaginary or speculative" standard in the irreparable harm context is inconsistent with binding case law. See Winter, 555 U.S. at 22, 129 S.Ct. 365 ("Issuing a preliminary injunction based only on a possibility of irreparable harm is inconsistent with our characterization of injunctive relief as an extraordinary remedy that may only be awarded upon a clear showing that the plaintiff is entitled to such relief." (citation omitted) (emphasis added)). The correct standard is whether a future injury is "likely." Id. But, because the Plaintiffs sufficiently demonstrated that their First Amendment interests are either threatened or impaired, they have met this standard. See Opulent Life Church, 697 F.3d at 295 (citing 11A Charles Alan Wright et al., Federal Practice and Procedure § 2948.1 (2d ed. 1995) ("When an alleged deprivation of a constitutional right is involved, most courts hold that no further showing of irreparable injury is necessary.")). Indeed, the record shows, and counsel confirmed at oral argument, that the officials' challenged conduct has not stopped.

Next, we turn to whether the balance of the equities warrants an injunction and whether such relief is in the public interest. Where the government is the opposing party, harm to the opposing party and the public interest "merge." Nken v. Holder, 556 U.S. 418, 435, 129 S.Ct. 1749, 173 L.Ed.2d 550 (2009).

The district court concluded that the equities weighed in favor of granting the injunction because the injunction maintains the "constitutional structure" and Plaintiffs' free speech rights. The officials argue that the district court gave short shrift to their assertions that the injunction could limit the Executive Branch's ability to "persuade" the American public, which raises separation-of-powers issues.

Although both Plaintiffs and the officials assert that their ability to speak is affected by the injunction, the government is not permitted to use the government-speech doctrine to "silence or muffle the expression of disfavored viewpoints." Matal, 582 U.S. at 235, 137 S.Ct. 1744.

It is true that the officials have an interest in engaging with social-media companies, including on issues such as misinformation and election interference. But the government is not permitted to advance these interests to the extent that it engages in viewpoint suppression. Because "[i]njunctions protecting First Amendment freedoms are always in the public interest," the equities weigh in Plaintiffs' favor. Opulent Life Church, 697 F.3d at 298 (quotation marks and citations omitted).

While the officials raise legitimate concerns that the injunction could sweep in lawful speech, we have addressed those concerns by modifying the scope of the injunction.

VI.

Finally, we turn to the language of the injunction itself. An injunction "is overbroad if it is not 'narrowly tailored to remedy the specific action which gives rise to the order' as determined by the substantive law at issue." Scott v. Schedler, 826 F.3d 207, 211 (5th Cir. 2016) (alterations adopted) (quoting John Doe #1 v. Veneman, 380 F.3d 807, 818 (5th Cir. 2004)). This requirement that a "plaintiff's remedy must be tailored to redress the plaintiff's particular injury" is in recognition of a federal court's "constitutionally prescribed role ... to vindicate the individual rights of the people appearing before it," not "generalized partisan preferences." Gill v. Whitford, — U.S. —, 138 S. Ct. 1916, 1933-34, 201 L.Ed.2d 313 (2018).

In addition, injunctions cannot be vague. "Every order granting an injunction ... must: (A) state the reasons why it issued; (B) state its terms specifically; and (C) describe in reasonable detail—and not by referring to the complaint or other document—the act or acts restrained or required." FED. R. CIV. P. 65(d)(1). The Supreme Court has explained:

[T]he specificity provisions of Rule 65(d) are no mere technical requirements. The Rule was designed to prevent uncertainty and confusion on the part of those faced with injunctive orders, and to avoid the possible founding of a contempt citation on a decree too vague to be understood. Since an injunctive order prohibits conduct under threat of judicial punishment, basic fairness requires that those enjoined receive explicit notice of precisely what conduct is outlawed.

Schmidt v. Lessard, 414 U.S. 473, 476, 94 S.Ct. 713, 38 L.Ed.2d 661 (1974) (citations omitted).

To be sure, "[t]he specificity requirement is not unwieldy," Meyer v. Brown & Root Construction Co., 661 F.2d 369, 373 (5th Cir. 1981), and "elaborate detail is unnecessary," Islander E. Rental Program v. Barfield, 145 F.3d 359 (5th Cir. 1998). But still, "an ordinary person reading the court's order should be able to ascertain from the document itself exactly what conduct is proscribed." Louisiana v. Biden, 45 F.4th at 846 (citation omitted). The preliminary injunction here is both vague and broader than necessary to remedy the Plaintiffs' injuries, as shown at this preliminary juncture. As an initial matter, it is axiomatic that an injunction is overbroad if it enjoins a defendant from engaging in legal conduct. Nine of the preliminary injunction's ten prohibitions risk doing just that. Moreover, many of the provisions are duplicative of each other and thus unnecessary.

Prohibitions one, two, three, four, five, and seven prohibit the officials from engaging in, essentially, any action "for the purpose of urging, encouraging, pressuring, or inducing" content moderation. But "urging, encouraging, pressuring" or even "inducing" action does not violate the Constitution unless and until such conduct crosses the line into coercion or significant encouragement. Compare Walker, 576 U.S. at 208, 135 S.Ct. 2239 ("[A]s a general matter, when the government speaks it is entitled to promote a program, to espouse a policy, or to take a position."), Finley, 524 U.S. at 598, 118 S.Ct. 2168 (Scalia, J., concurring in judgment) ("It is the very business of government to favor and disfavor points of view...."), and Vullo, 49 F.4th at 717 (holding statements "encouraging" companies to evaluate risk of doing business with the plaintiff did not violate the Constitution where the statements did not "intimate that some form of punishment or adverse regulatory action would follow the failure to accede to the request"), with Blum, 457 U.S. at 1004, 102 S.Ct. 2777, and O'Handley, 62 F.4th at 1158 ("In deciding whether the government may urge a private party to remove (or refrain from engaging in) protected speech, we have drawn a sharp distinction between attempts to convince and attempts to coerce."). These provisions also tend to overlap with each other, barring various actions that may cross the line into coercion. There is no need to try to spell out every activity that the government could possibly engage in that may run afoul of the Plaintiffs' First Amendment rights as long as the unlawful conduct is prohibited.

The eighth, ninth, and tenth provisions likewise may be unnecessary to ensure Plaintiffs' relief. A government actor generally does not violate the First Amendment by simply "following up with social-media companies" about content moderation, "requesting content reports from social-media companies" concerning their content moderation, or asking social-media companies to "Be on The Lookout" for certain posts. Plaintiffs have not carried their burden to show that these activities must be enjoined to afford Plaintiffs full relief.

While these activities, standing alone, are not violative of the First Amendment and therefore must be removed from the preliminary injunction, we note that these activities may violate the First Amendment when they are part of a larger scheme of government coercion or significant encouragement, and neither our opinion nor the modified injunction should be read to hold otherwise.

These provisions are vague as well. There would be no way for a federal official to know exactly when his or her actions cross the line from permissibly communicating with a social-media company to impermissibly "urging, encouraging, pressuring, or inducing" them "in any way." See Scott, 826 F.3d at 209, 213 ("[a]n injunction should not contain broad generalities"); Islander East, 145 F.3d 359 (finding injunction against "interfering in any way" too vague). Nor does the injunction define "Be on The Lookout" or "BOLO." That, too, renders it vague. See Louisiana v. Biden, 45 F.4th at 846 (holding injunction prohibiting the federal government from "implementing the Pause of new oil and natural gas leases on public lands or in offshore waters as set forth in [the challenged Executive Order]" was vague because the injunction did not define the term "Pause" and the parties had each proffered different yet reasonable interpretations of the Pause's breadth).

While helpful to some extent, the injunction's carveouts do not solve its clarity and scope problems. Although they seem to greenlight legal speech, the carveouts, too, include vague terms and appear to authorize activities that the injunction otherwise prohibits on its face. For instance, it is not clear whether the Surgeon General could publicly urge social media companies to ensure that cigarette ads do not target children. While such a statement could meet the injunction's exception for "exercising permissible public government speech promoting government policy or views on matters of public concern," it also "urg[es] ... in any manner[] social-media companies to change their guidelines for removing, deleting, suppressing, or reducing content containing protected speech." This example illustrates both the injunction's overbreadth, as such public statements constitute lawful speech, see Walker, 576 U.S. at 208, 135 S.Ct. 2239, and vagueness, because the government-speech exception is ill-defined, see Scott, 826 F.3d at 209, 213 (vacating injunction requiring the Louisiana Secretary of State to maintain in force his "policies, procedures, and directives" related to the enforcement of the National Voter Registration Act, where "policies, procedures, and directives" were not defined). At the same time, given the legal framework at play, these carveouts are likely duplicative and, as a result, unnecessary.

Finally, the fifth prohibition—which bars the officials from "collaborating, coordinating, partnering, switchboarding, and/or jointly working with the Election Integrity Partnership, the Virality Project, the Stanford Internet Observatory, or any like project or group" to engage in the same activities the officials are proscribed from doing on their own—may implicate private, third-party actors that are not parties in this case and that may be entitled to their own First Amendment protections. Because the provision fails to identify the specific parties that are subject to the prohibitions, see Scott, 826 F.3d at 209, 213, and "exceeds the scope of the parties' presentation," OCA-Greater Houston v. Texas, 867 F.3d 604, 616 (5th Cir. 2017), Plaintiffs have not shown that the inclusion of these third parties is necessary to remedy their injury. So, this provision cannot stand at this juncture. See also Alexander v. United States, 509 U.S. 544, 550, 113 S.Ct. 2766, 125 L.Ed.2d 441 (1993) ("[C]ourt orders that actually [] forbid speech activities are classic examples of prior restraints."). For the same reasons, the injunction's application to "all acting in concert with [the officials]" is overbroad.

We therefore VACATE prohibitions one, two, three, four, five, seven, eight, nine, and ten of the injunction.

That leaves provision six, which bars the officials from "threatening, pressuring, or coercing social-media companies in any manner to remove, delete, suppress, or reduce posted content of postings containing protected free speech." But, those terms could also capture otherwise legal speech. So, the injunction's language must be further tailored to exclusively target illegal conduct and provide the officials with additional guidance or instruction on what behavior is prohibited. To be sure, our standard practice is to remand to the district court to tailor such a provision in the first instance. See Scott, 826 F.3d at 214. But this is far from a standard case. In light of the expedited nature of this appeal, we modify the injunction's remaining provision ourselves.

In doing so, we look to the Seventh Circuit's approach in Backpage.com, 807 F.3d at 239. There, the Seventh Circuit held that a county sheriff violated Backpage's First Amendment rights by demanding that financial service companies cut ties with Backpage in an effort to "crush" the platform (an online forum for "adult" classified ads). Id. at 230. To remedy the constitutional violation, the court issued the following injunction:

Sheriff Dart, his office, and all employees, agents, or others who are acting or have acted for or on behalf of him, shall take no actions, formal or informal, to coerce or threaten credit card companies, processors, financial institutions, or other third parties with sanctions intended to ban credit card or other financial services from being provided to Backpage.com.

Id. at 239.

Like the Seventh Circuit's preliminary injunction in Backpage.com, we endeavor to modify the preliminary injunction here to target the coercive government behavior with sufficient clarity to provide the officials notice of what activities are proscribed. Specifically, prohibition six of the injunction is MODIFIED to state:

Defendants, and their employees and agents, shall take no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies' decision-making processes.

Under the modified injunction, the enjoined Defendants cannot coerce or significantly encourage a platform's content-moderation decisions. Such conduct includes threats of adverse consequences—even if those threats are not verbalized and never materialize—so long as a reasonable person would construe a government's message as alluding to some form of punishment. That, of course, is informed by context (e.g., persistent pressure, perceived or actual ability to make good on a threat). The government cannot subject the platforms to legal, regulatory, or economic consequences (beyond reputational harms) if they do not comply with a given request. See Bantam Books, 372 U.S. at 68, 83 S.Ct. 631; Okwedy, 333 F.3d at 344. The enjoined Defendants also cannot supervise a platform's content moderation decisions or directly involve themselves in the decision itself. Social-media platforms' content-moderation decisions must be theirs and theirs alone. See Blum, 457 U.S. at 1008, 102 S.Ct. 2777. This approach captures illicit conduct, regardless of its form.

Because the modified injunction does not prohibit Defendants from engaging in activities that could include legal conduct, no carveouts are needed. There are two guiding inquiries for Defendants. The first is whether their action could be reasonably interpreted as a threat to take, or cause to be taken, an official action against the social-media companies if the companies decline Defendants' request to remove, delete, suppress, or reduce protected free speech on their platforms. The second is whether Defendants have exercised active, meaningful control over the platforms' content-moderation decisions to such a degree that it inhibits the platforms' independent decision-making.

To be sure, this modified injunction still "restricts government communications not specifically targeted to particular content posted by plaintiffs themselves," as the officials protest. But that does not mean it is still overbroad. To the contrary, an injunction "is not necessarily made overbroad by extending benefit or protection to persons other than prevailing parties in the lawsuit—even if it is not a class action—if such breadth is necessary to give prevailing parties the relief to which they are entitled." Pro. Ass'n of Coll. Educators, TSTA/NEA v. El Paso Cnty. Cmty. Coll. Dist., 730 F.2d 258, 274 (5th Cir. 1984) (citations omitted); see also Bresgal v. Brock, 843 F.2d 1163, 1170-71 (9th Cir. 1987). Such breadth is plainly necessary, if not inevitable, here. The officials have engaged in a broad pressure campaign designed to coerce social-media companies into suppressing speakers, viewpoints, and content disfavored by the government. The harms that radiate from such conduct extend far beyond just the Plaintiffs; they impact every social-media user. Naturally, then, an injunction against such conduct will afford protections that extend beyond just Plaintiffs, too. Cf. Feds for Med. Freedom v. Biden, 63 F.4th 366, 387 (5th Cir. 2023) ("[A]n injunction [can] benefit non-parties as long as that benefit [is] merely incidental." (internal quotation marks and citation omitted)).

As explained in Part IV above, the district court erred in finding that the NIAID Officials and State Department Officials likely violated Plaintiffs' First Amendment rights. So, we exclude those parties from the injunction. Accordingly, the term "Defendants" as used in this modified provision is defined to mean only the following entities and officials included in the original injunction:

The following members of the Executive Office of the President of the United States: White House Press Secretary, Karine Jean-Pierre; Counsel to the President, Stuart F. Delery; White House Partnerships Manager, Aisha Shah; Special Assistant to the President, Sarah Beran; Administrator of the United States Digital Service within the Office of Management and Budget, Mina Hsiang; White House National Climate Advisor, Ali Zaidi; White House Senior COVID-19 Advisor, formerly Andrew Slavitt; Deputy Assistant to the President and Director of Digital Strategy, formerly Rob Flaherty; White House COVID-19 Director of Strategic Communications and Engagement, Dori Salcido; White House Digital Director for the COVID-19 Response Team, formerly Clarke Humphrey; Deputy Director of Strategic Communications and Engagement of the White House COVID-19 Response Team, formerly Benjamin Wakana; Deputy Director for Strategic Communications and External Engagement for the White House COVID-19 Response Team, formerly Subhan Cheema; White House COVID-19 Supply Coordinator, formerly Timothy W. Manning; and Chief Medical Advisor to the President, Dr. Hugh Auchincloss, along with their directors, administrators and employees. Surgeon General Vivek H. Murthy; and Chief Engagement Officer for the Surgeon General, Katharine Dealy, along with their directors, administrators and employees. The Centers for Disease Control and Prevention ("CDC"), and specifically the following employees: Carol Y. Crawford, Chief of the Digital Media Branch of the CDC Division of Public Affairs; Jay Dempsey, Social-media Team Leader, Digital Media Branch, CDC Division of Public Affairs; and Kate Galatas, CDC Deputy Communications Director. The Federal Bureau of Investigation ("FBI"), and specifically the following employees: Laura Dehmlow, Section Chief, FBI Foreign Influence Task Force; and Elvis M. Chan, Supervisory Special Agent of Squad CY-1 in the FBI San Francisco Division. And the Cybersecurity and Infrastructure Security Agency ("CISA"), and specifically the following employees: Jen Easterly, Director of CISA; Kim Wyman, Senior Cybersecurity Advisor and Senior Election Security Leader; and Lauren Protentis, Geoffrey Hale, Allison Snell, and Brian Scully.

VII.

The district court's judgment is AFFIRMED with respect to the White House, the Surgeon General, the CDC, the FBI, and CISA and REVERSED as to all other officials. The preliminary injunction is VACATED except for prohibition number six, which is MODIFIED as set forth herein. The preliminary injunction is STAYED for ten days following the date hereof. The Clerk is DIRECTED to issue the mandate forthwith.

