NetChoice v. Bonta

United States District Court, Northern District of California
Dec 31, 2024
5:24-cv-07885-EJD (N.D. Cal. Dec. 31, 2024)

Opinion


NETCHOICE, Plaintiff, v. ROB BONTA, Defendant.


ORDER GRANTING IN PART AND DENYING IN PART MOTION FOR PRELIMINARY INJUNCTION RE: ECF NO. 2

EDWARD J. DAVILA UNITED STATES DISTRICT JUDGE

When Facebook first launched in 2004, the phrase “social media” had not yet made its way into most people's vocabularies. Much has changed since then. Today, one would be hard-pressed to find someone who has not heard of social media. Companies like Facebook (now known as Meta) have grown into multibillion-dollar juggernauts, and social media has rapidly grown to touch on nearly every aspect of modern life. With social media's growth, it is easier than ever to forge connections with others, whether that is with someone across the street or across the ocean. News can be shared almost instantly, not only by traditional media outlets but also by individuals seeking to contribute to public conversations. And officials, government agencies, and advocacy organizations alike now have a direct line to those they serve and those who support them. But such rapid growth comes with its share of challenges. It is easy to connect on social media, but in some circumstances, those connections may degrade the quality of our interactions with each other in ways that amplify rather than reduce loneliness. Just as news travels quickly, so does misinformation. And much like those working to better the world may leverage social media platforms to reach a wide audience, so can those seeking to do harm.

Social Media, Oxford English Dictionary, https://www.oed.com/dictionary/social-media_n (“social media” first published in 2012); Press Release, Merriam-Webster Inc., Something to Tweet About: Merriam-Webster's Collegiate Dictionary Updated for 2011 (Aug. 25, 2011), https://www.prnewswire.com/news-releases/something-to-tweet-about-128379408.html (noting the addition of “social media” into the Merriam-Webster dictionary).

U.S. Surgeon General, Our Epidemic of Loneliness and Isolation 20 (2023), https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf.

A robust debate has emerged about how to maximize social media's benefits while minimizing its potential harms. Social media companies have taken some voluntary steps to address possible harms, while critics question whether those steps have gone too far or not far enough. In recent years, policymakers have focused on one particular concern: the potential for social media to harm children's health. Over the past two years, at least five states, including California, have passed laws seeking to protect children from social media's harmful effects. See NetChoice, LLC v. Bonta, 113 F.4th 1101 (9th Cir. 2024) (California); NetChoice, LLC v. Reyes, --- F.Supp.3d ----, 2024 WL 4135626 (D. Utah Sept. 10, 2024); NetChoice, LLC v. Fitch, --- F.Supp.3d ----, 2024 WL 3276409 (S.D. Miss. July 1, 2024); NetChoice, LLC v. Yost, 716 F.Supp.3d 539 (S.D. Ohio 2024); NetChoice, LLC v. Griffin, 2023 WL 5660155 (W.D. Ark. Aug. 31, 2023). At the federal level as well, Congress has discussed legislation to protect kids on social media. See Kids Online Safety Act, S. 1409, 118th Cong. (2023).

This case arises from California's most recent efforts to address social media safety for children. On September 20, 2024, California's governor signed SB 976, also known as the Protecting Our Kids from Social Media Addiction Act, into law. A month and a half later, on November 12, 2024, Plaintiff NetChoice, an internet trade group with several social media companies among its members, sued Defendant Rob Bonta, in his official capacity as California Attorney General, to block SB 976. In doing so, NetChoice raised facial First Amendment and vagueness challenges to SB 976, in addition to as-applied challenges on behalf of five of its members. At the same time, NetChoice moved to enjoin enforcement of SB 976 ahead of its effective date of January 1, 2025. Since SB 976 was scheduled to take effect so soon after NetChoice filed its motion, the parties agreed to an expedited briefing schedule. The parties finished briefing NetChoice's motion on December 9, 2024, and the Court held a hearing on the matter eight days later, on December 17, 2024. Because NetChoice has shown that parts of SB 976 are likely to infringe upon the First Amendment, the Court GRANTS IN PART and DENIES IN PART NetChoice's preliminary injunction motion.

I. BACKGROUND

When the California Legislature enacted SB 976, it did so for the stated purpose of “ensur[ing] that social media platforms obtain parental consent before exposing children and adolescents to harmful and addictive social media features.” SB 976, § 1(g), ECF No. 2-1. According to the Legislature, this was necessary because of mounting evidence that linked minors' use of social media to negative health outcomes such as eating disorders, disrupted sleep patterns, depression, and self-harm, id. §§ 1(c)-(e), and because “some social media platforms [had] evolved to include addictive features” that could keep minors on those platforms for increasingly long hours. Id. § 1(b).

A. Coverage Definition

SB 976 regulates companies that offer certain types of personalized media feeds “as a significant part of the service provided by [those companies'] internet website, online service, online application, or mobile application.” Cal. Health & Safety Code § 27000.5(b)(1). Specifically, personalized feeds that recommend user-generated or user-shared content based on “information provided by the user, or otherwise associated with the user or the user's device,” trigger SB 976's provisions. Id. § 27000.5(a).

However, there are several exceptions to this coverage definition. The first group of exceptions is based on a personalized feed's features. A feed that would otherwise meet the coverage definition is excluded if that feed also acts in one or more of seven ways, such as by recommending content based on information “not persistently associated with the user or user's device,” based on privacy or accessibility settings, or based on a user's “express[] and unambiguous[]” request for a specific category of content. Id. §§ 27000.5(a)(1)-(7). The seven feed-based exceptions are varied, but they generally serve to exempt feeds from SB 976's provisions if those feeds recommend content based on a user's explicit requests and not based on data collected about that user's previous internet activity. See id. The second group of exceptions is based on the nature of a potentially covered company's business. Even if a company offers a feed that falls under SB 976's coverage definition, the bill's provisions do not apply if that company provides services “for which interactions between users are limited to commercial transactions or to consumer reviews” or if that company “operates a feed for the primary purpose of cloud storage.” Id. § 27000.5(b)(2).

B. Regulatory Requirements

At a high level, SB 976 imposes what some call an “age gate.” That is, SB 976 requires covered companies to gate off, or restrict, minors from accessing certain features unless those minors receive “verifiable parental consent.” Id. §§ 27001(a)(2), 27002(a)-(b). From January 1, 2025, to December 31, 2026, covered entities need only gate off features from users actually known to be minors. Id. §§ 27001(a)(1)(A), 27002(a)(2). But starting January 1, 2027, covered companies must undertake age assurance efforts to determine whether users are adults or minors; then, they must gate off restricted features from any user whom the company cannot reasonably determine to be an adult. Id. §§ 27001(a)(1)(B), 27002(a)(2). To implement this latter age assurance requirement, SB 976 directs the California Attorney General to promulgate regulations on the issue by January 1, 2027. Id. § 27006(b).

SB 976 restricts two features behind an age gate. Minors can only access those features with parental consent. Id. §§ 27001(a)(2), 27002(a). First, SB 976 restricts access to personalized feeds that meet its coverage definition and do not satisfy any exception. Id. § 27001(a). Second, SB 976 prohibits push notifications (i.e., pop-ups from smartphone applications) from being sent at certain times of the day. Id. § 27002(a). Namely, SB 976 bars covered companies from sending notifications at night between 12 a.m. and 6 a.m. (year-round) and also during school hours between 8 a.m. and 3 p.m. (Mondays through Fridays from September through May). Id.

In addition to age-gating the above two features, SB 976 requires covered companies to develop settings that parents can use to control their children's social media use. Some of those required settings overlap with the age-gated features. Id. §§ 27002(b)(1)-(2), (4). Two, though, are distinct. The first gives parents the power to “[l]imit their child's ability to view the number of likes or other forms of feedback to pieces of media within an addictive feed,” and SB 976 mandates that this setting be turned on by default. Id. § 27002(b)(3). The second allows parents to “[s]et their child's account to private mode, in a manner in which only users to whom the child is connected on the [covered company's] service or application may view or respond to content posted by the child”; again, this setting must be turned on by default. Id. § 27002(b)(5).

Finally, SB 976 compels covered companies to make disclosures related to the requirements above. Under this last provision, covered companies must publicly disclose the following information every year: the number of minors who use their service; the number of minor users who received parental consent to access covered feeds; and the number of minor users for whom SB 976's default settings are or are not enabled. Id. § 27005.

II. LEGAL STANDARD

“A preliminary injunction is an extraordinary remedy never awarded as of right.” Winter v. Nat. Res. Def. Council, Inc., 555 U.S. 7, 24 (2008). To secure a preliminary injunction, a plaintiff must establish that it (1) “is likely to succeed on the merits,” (2) “is likely to suffer irreparable harm in the absence of preliminary relief,” (3) “the balance of equities tips in [its] favor,” and (4) “that an injunction is in the public interest.” Id. at 20.

Because NetChoice makes a First Amendment claim, the likelihood of success factor is subject to burden shifting. The plaintiff still bears the initial burden and must demonstrate “a colorable claim that its First Amendment rights have been infringed[] or are threatened with infringement.” Smith v. Helzer, 95 F.4th 1207, 1214 (9th Cir. 2024) (quoting Cal. Chamber of Com. v. Council for Educ. & Rsch. on Toxics, 29 F.4th 468, 478 (9th Cir. 2022)). Once the plaintiff makes that initial showing, the burden “shifts to the government to justify the restriction on speech.” Id. (quoting Cal. Chamber of Com., 29 F.4th at 478). If the government fails to satisfy that burden, the plaintiff prevails on likelihood of success. Put differently, the plaintiff has the initial burden of showing that a challenged law implicates the First Amendment while the government has the subsequent burden of showing that the challenged law satisfies the appropriate level of scrutiny. Doe v. Harris, 772 F.3d 563, 570 (9th Cir. 2014).

Further, likelihood of success “is especially important when a plaintiff alleges a constitutional violation and injury” like NetChoice does here. Baird v. Bonta, 81 F.4th 1036, 1040 (9th Cir. 2023). A finding that the plaintiff is likely to succeed on its First Amendment claim puts a strong (and perhaps insurmountable) thumb on the scale in favor of a preliminary injunction. Likelihood of success on a constitutional claim necessarily implies that the plaintiff would suffer irreparable harm absent an injunction. Klein v. City of San Clemente, 584 F.3d 1196, 1208 (9th Cir. 2009) (quoting Elrod v. Burns, 427 U.S. 347, 373 (1976)) (“[T]he loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury.”); Doe, 772 F.3d at 583 (quoting Associated Press v. Otter, 682 F.3d 821, 826 (9th Cir. 2012)) (same); see also Hernandez v. Sessions, 872 F.3d 976, 995 (9th Cir. 2017) (“Thus, it follows inexorably from our conclusion that the government's current policies are likely unconstitutional . . . that Plaintiffs have also carried their burden as to irreparable harm.”). And analysis of the remaining equity and public interest factors, which merge when the government is a defendant, Drakes Bay Oyster Co. v. Jewell, 747 F.3d 1073, 1092 (9th Cir. 2014), begins tilted in favor of a preliminary injunction since it is “always in the public interest to prevent the violation of a party's constitutional rights.” Riley's Am. Heritage Farms v. Elsasser, 32 F.4th 707, 731 (9th Cir. 2022) (quoting Melendres v. Arpaio, 695 F.3d 990, 1002 (9th Cir. 2012)).

Meanwhile, courts can deny a preliminary injunction without “‘consider[ing] the other [preliminary injunction] factors' if a movant fails to show a likelihood of success on the merits.” Baird, 81 F.4th at 1040 (quoting Disney Enters., Inc. v. VidAngel, Inc., 869 F.3d 848, 856 (9th Cir. 2017)); see also Smith, 95 F.4th at 1215 (affirming denial of a preliminary injunction after finding no likelihood of success and without discussing any other preliminary injunction factor).

III. DISCUSSION

A. Ripeness of Age Assurance Challenges

The Court begins by addressing the provisions of SB 976 that require covered companies to conduct age assurance (Cal. Health & Safety Code §§ 27001(a)(1)(B), 27002(a)(2)) because Defendant's argument regarding those provisions, namely that challenges to age assurance are not yet ripe, cuts across this entire case. The purpose of ripeness doctrine is “to prevent the courts, through avoidance of premature adjudication, from entangling themselves in abstract disagreements.” Nat'l Park Hosp. Ass'n v. Dep't of Interior, 538 U.S. 803, 807 (2003) (quoting Abbott Lab'ys v. Gardner, 387 U.S. 136, 148-49 (1967)). For First Amendment cases, courts give more leeway to plaintiffs faced with ripeness questions. Twitter, Inc. v. Paxton, 56 F.4th 1170, 1173-74 (9th Cir. 2022) (citation omitted). But that does not mean ripeness becomes toothless whenever the First Amendment is involved. See id. at 1172 (affirming dismissal of First Amendment challenge as unripe). In this instance, the Court finds that NetChoice's challenge to age assurance is not yet ripe because covered companies do not need to implement age assurance procedures until January 1, 2027, and the regulations governing age assurance have not yet been issued.

Ripeness has both constitutional and prudential dimensions. Nat'l Park Hosp. Ass'n, 538 U.S. at 807 (citation omitted). NetChoice, as the party asserting a claim, has the burden of satisfying both dimensions. Colwell v. Dep't of Health & Hum. Servs., 558 F.3d 1112, 1121 (9th Cir. 2009).

1. Constitutional Ripeness

Although the parties do not directly address constitutional ripeness, the Court starts there because it has a duty to consider all aspects of ripeness. City & Cnty. of S.F. v. Garland, 42 F.4th 1078, 1084 (9th Cir. 2022). The constitutional component of ripeness is equivalent to the injury-in-fact prong of Article III standing. Ass'n of Irritated Residents v. EPA, 10 F.4th 937, 944 (9th Cir. 2021) (citation omitted). For a pre-enforcement challenge like the one NetChoice brings here, this requires examining “(1) whether the plaintiffs have articulated a concrete plan to violate the law in question, (2) whether the prosecuting authorities have communicated a specific warning or threat to initiate proceedings, and (3) the history of past prosecution or enforcement under the challenged statute.” Alaska Right to Life Pol. Action Comm. v. Feldman, 504 F.3d 840, 849 (9th Cir. 2007).

Looking to the first constitutional ripeness factor, the Court finds little doubt that NetChoice's members have a concrete plan to forego implementing age assurance-most of NetChoice's members do not have the capabilities to conduct age assurance, and they would like to avoid both the costs of establishing such capabilities as well as the potential drop in user growth that age assurance might trigger. Cleland Decl. ¶¶ 29, 32, ECF No. 2-2. Second, even though the record does not reflect that Defendant has made any specific enforcement threat, for First Amendment cases, the necessary “threat of enforcement may be inherent in the challenged statute.” Wolfson v. Brammer, 616 F.3d 1045, 1059 (9th Cir. 2010). This way, “a plaintiff need not risk prosecution in order to challenge” restrictions on protected speech. Id. at 1060. Finally, the third factor plays little role here because SB 976 is new, so there is no history of enforcement to examine. Id. But, California's participation in other lawsuits seeking to hold NetChoice members accountable for alleged harms to minors suggests that Defendant would be willing to enforce SB 976. E.g., California v. Meta Platforms, Inc., No. 4:23-cv-05448 (N.D. Cal.). Therefore, the Court finds that NetChoice has established constitutional ripeness.

NetChoice asserts associational standing, so its standing to sue derives from its members' independent standing to sue. See Hunt v. Wash. State Apple Advert. Comm'n, 432 U.S. 333, 343 (1977). Therefore, the Court analyzes injuries to NetChoice's members rather than to NetChoice itself.

2. Prudential Ripeness

However, NetChoice's challenge to age assurance falters at prudential ripeness. The prudential component of ripeness requires examining (1) the “fitness of the issues for judicial decision” and (2) “the hardship to the parties of withholding court consideration.” Ass'n of Irritated Residents, 10 F.4th at 944 (quoting Abbott Lab'ys, 387 U.S. at 149). The fitness component reflects the “interest the judiciary has in delaying consideration of a case,” that is to say, the interest in waiting for further development that renders disputes more concrete and less hypothetical. Oklevueha Native Am. Church of Haw., Inc. v. Holder, 676 F.3d 829, 838 (9th Cir. 2012). The hardship component “serves as a counterbalance” to the judiciary's interests in awaiting further development. Id. If a plaintiff could suffer great hardship from delaying judicial review, courts should address that plaintiff's claim even if the record and the circumstances do not present a perfectly concrete issue.

a. Fitness for Review

The Court finds that the free speech issues surrounding age assurance are not yet fit for judicial review. Fitness turns on two factors: whether the challenged action is final and whether the issues raised “are primarily legal [and] do not require further factual development.” Wolfson, 616 F.3d at 1060 (quoting US W. Commc'ns v. MFS Intelenet, Inc., 193 F.3d 1112, 1118 (9th Cir. 1999)). Here, SB 976 is final, but age assurance raises extensive factual issues that still require more development.

Efforts to impose age assurance requirements on the internet are not new, and those previous efforts have been met with First Amendment scrutiny as well. The Supreme Court first addressed age assurance almost thirty years ago in Reno v. ACLU, 521 U.S. 844 (1997). In that case, the Supreme Court reviewed whether it was constitutional under the First Amendment for Congress to criminalize the transmission of offensive or indecent internet communications to minors through the Communications Decency Act (CDA). As relevant here, the Supreme Court considered whether the presence of an age assurance affirmative defense narrowly tailored the CDA to Congress's purpose. Id. at 881-82. The answer was no. Id. In reaching that conclusion, the Supreme Court relied on the district court's factual findings that age assurance technology at the time was neither effective nor economically feasible for most websites. Id. As such, websites would not be able to take advantage of the age assurance affirmative defense and would necessarily need to limit speech to adults in order to avoid criminal liability under the CDA. Id. at 882. In other words, the government “failed to prove that the proffered defense would significantly reduce the heavy burden on adult speech.” Id.

The Supreme Court returned to online age assurance in Ashcroft v. ACLU, 542 U.S. 656 (2004). Ashcroft dealt with Congress's second attempt at protecting children from sexually explicit materials on the internet after the Supreme Court struck down its first attempt in Reno. This time, Congress passed the Child Online Protection Act (COPA), which narrowed the scope of the content prohibited to children and broadened the age assurance affirmative defense. Id. at 661-62. Again, the Supreme Court considered whether age assurance helped to narrowly tailor COPA to Congress's purpose. And once again, the Supreme Court answered no. Id. at 666-70. As it did in Reno, the Supreme Court reached this answer by carefully parsing the district court's findings of fact. The district court found that, based on the current state of age assurance methods, a less restrictive alternative (filtering technology) would be more effective than age assurance. Id. at 668. Importantly, the Supreme Court also remanded this case for additional factfinding because the record in front of it “[did] not reflect current technological reality.” Id. at 671. Further, Ashcroft expressly left open the possibility that, upon further factual development, the district court could conclude that age assurance was the least restrictive alternative for meeting Congress's purpose. Id. at 673. In doing so, the Supreme Court made clear that the viability of age assurance under the First Amendment is not an abstract question of law but rather requires a deep factual dive into how the implementation of age assurance using current technologies does or does not burden speech.

On remand, both the district and circuit courts heeded this message about the factual nature of the First Amendment analysis of age assurance. ACLU v. Mukasey, 534 F.3d 181, 195-97 (3d Cir. 2008). The circuit court eventually affirmed a permanent injunction against COPA in part because it agreed with the district court that the presence of the age assurance defense “[did] not aid in narrowly tailoring COPA.” Id. For a third time, the district court's factual findings were key. Even after updating the record, the district court found that no effective age assurance options existed at the time, that options requiring payment or personal information would deter adult users, and that websites themselves would have to bear implementation costs disproportionate to age assurance's usefulness. Id.

The lesson from this line of cases is that a First Amendment analysis of age assurance requirements entails a careful evaluation of how those requirements burden speech. That type of evaluation is highly factual and depends on the current state of age assurance technology. For example, if certain speech is prohibited to minors but there is no effective age assurance mechanism, websites and other speakers might err on the side of denying the prohibited speech to both adults and children so that they can avoid liability. Or, if there are effective age assurance mechanisms but they are onerous, implementing those mechanisms might discourage adults from accessing speech that they are entitled to since those adults may not wish to go through the hassle of age assurance.

Of course, the fact that technology can rapidly change does not render an issue unfit for judicial review. Such rapid change is commonplace, and courts cannot avoid adjudicating technology issues simply because technology progresses quickly. Nor does the need for a robust factual record render an issue unfit for judicial review by itself. Most cases will require the parties to explore relevant factual circumstances. The key is whether the current circumstances are sufficiently concrete in ways that allow a court to determine the scope and effects of a challenged regulation. Alaska Right to Life, 504 F.3d at 849 (citation omitted). The reason this case is not fit for judicial review is that SB 976's requirements for age assurance are still in flux. SB 976 requires covered entities to “reasonably determine[]” whether a user is a minor starting January 1, 2027. Cal. Health & Safety Code §§ 27001(a)(1)(B), 27002(a)(2). That requirement, however, is merely a “starting point” for determining what covered companies will have to do. United States v. Braren, 338 F.3d 971, 976 (9th Cir. 2003). SB 976 also requires Defendant to promulgate regulations implementing these age assurance provisions. Cal. Health & Safety Code § 27006(b). There are many ways that those regulations can interpret what it means to make “reasonable” age assurance efforts. Perhaps the regulations might require covered entities to verify age through photo identification. Or alternatively, they may permit covered entities to use age estimation tools, such as by using computer vision to analyze facial features. Amicus Br. 11-12, ECF No. 31-1 (citing Noah Apthorpe et al., Online Age Gating: An Interdisciplinary Evaluation 21-22 (Aug. 1, 2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4937328). The regulations may even allow covered entities to estimate age using tools that run in the background and require no user input. For example, many companies now collect extensive data about users' activity throughout the internet that allow them to develop comprehensive profiles of each user for targeted advertising. Id. at 12; see also, e.g., Egelman Decl. ¶¶ 20-21, ECF No. 18-1. The regulations implementing SB 976 could permit covered entities to use such advertising profiles or similar techniques to estimate age without needing a user to affirmatively submit information.

These differences matter. As NetChoice itself observes, the burden placed on covered companies to conduct age assurance varies “with the level of certainty with which companies must verify ages.” Cleland Decl. ¶ 29. So too will the burdens that age assurance places on adult access to speech. While adults might be reluctant or unable to provide photo identification, age assurance that runs in the background may discourage few if any adults from accessing speech. All this is important when assessing whether SB 976's age assurance requirement is properly tailored to achieve California's goal of protecting children. And none of this can be developed in the factual record until Defendant issues regulations on age assurance. Until then, both the parties and the Court would only be guessing at which age assurance tools they should focus on when weighing the burdens that those tools may have on speech. In sum, because Defendant still has significant leeway to clarify the scope of SB 976's age assurance provisions, the issue of age assurance is not yet fit for review. Cf. Pakdel v. City & Cnty. of S.F., 594 U.S. 474, 480 (2021) (“[F]ailure to properly pursue administrative procedures [in a takings case] may render a claim unripe if avenues still remain for the government to clarify or change its decision.”) (emphasis omitted).

NetChoice resists this conclusion by pointing to Brown v. Entertainment Merchants Association, 564 U.S. 786, 794 (2011), a case in which the Supreme Court struck down California's ban on selling violent video games to minors. Brown explained that minors have First Amendment rights to receive speech, and those rights are not conditional on parental consent. Id. at 795 n.3 (states do not have “the power to prevent children from hearing or saying anything without their parents' prior consent”) (emphasis in original). But NetChoice goes too far to the extent it argues that this means restricting speech by age is per se unconstitutional. Brown did not hold that California's violent video game ban was per se unconstitutional; instead, it applied strict scrutiny. Id. at 799-804. More importantly, the Supreme Court has expressly declined to hold that governments are “incapable of enacting any regulation of the Internet designed to prevent minors from gaining access to harmful materials.” Ashcroft, 542 U.S. at 672. Indeed, the Supreme Court has in the past upheld regulations that restricted speech to minors but not to adults. See, e.g., Ginsberg v. New York, 390 U.S. 629 (1968). At most, Brown stands for the proposition that courts look at age-based speech restrictions skeptically.

At hearing, NetChoice also suggested that even if there is no per se rule against age-based speech restrictions, further factual development would not be helpful here because the core of age assurance's unconstitutionality stems from fundamental human nature. According to NetChoice, people are less likely to do something if it requires an additional step, and age assurance necessarily entails additional steps and barriers. By putting up additional barriers, age assurance deters adults from accessing speech. To be sure, “fact-intensive inquiries that depend on further factual development may nevertheless be ripe if . . . that development would do little to aid the court's decision.” Educ. Credit Mgmt. Corp. v. Coleman (In re Coleman), 560 F.3d 1000, 1009 (9th Cir. 2009). But here, NetChoice's explanation for why further fact development would be unhelpful depends on the assumption that all age assurance tools must require extra steps a user would be unwilling to take. As just discussed, that is not necessarily the case. Age assurance processes that run in the background may not appreciably depress adults' access to speech at all. Therefore, the Court finds that NetChoice's challenge to SB 976's age assurance provisions is not fit for judicial review.

b. Hardship

Nevertheless, lack of fitness may be overcome if withholding review would result in hardship. Oklevueha, 676 F.3d at 838. Plaintiffs must demonstrate a high level of hardship, though, to surmount a finding that the issues are not yet fit for review. Hardship “does not mean just anything that makes life harder; it means hardship of a legal kind, or something that imposes a significant practical harm upon the plaintiff.” Colwell, 558 F.3d at 1128 (quoting Nat. Res. Def. Council v. Abraham, 388 F.3d 701, 706 (9th Cir. 2004)). It requires a “significant change in the plaintiffs' conduct of their affairs with serious penalties attached to noncompliance.” Stormans, Inc. v. Selecky, 586 F.3d 1109, 1126 (9th Cir. 2009) (quoting Ass'n of Am. Med. Colls. v. United States, 217 F.3d 770, 783 (9th Cir. 2000)). Hardship must also be urgent enough to “pose an immediate dilemma.” Colwell, 558 F.3d at 1128 (quoting Ass'n of Am. Med. Colls., 217 F.3d at 783). If hardship is not “direct and immediate,” there is little harm to delaying judicial review. Stormans, 586 F.3d at 1126 (quoting MFS Intelenet, 193 F.3d at 1118). In sum, hardship must be sufficiently serious and immediate to overcome a lack of fitness for review.

NetChoice has not met its burden to show hardship sufficient to overcome a lack of fitness for review. SB 976's age assurance provisions do not take effect until January 1, 2027, giving NetChoice and its members two years before compliance becomes an issue. Certainly, covered companies must begin their compliance efforts before SB 976's provisions become effective. That is the only way to ensure that those companies will have proper age assurance infrastructure in place by the time SB 976 requires it. However, nothing in the record indicates that those efforts must start now as opposed to waiting until after Defendant issues the relevant regulations. NetChoice asserts that developing age assurance tools will be “costly [and] time-consuming,” Cleland Decl. ¶ 29, but it is not clear how costly and how time-consuming it will be. If developing age assurance tools would take six months, covered companies could wait longer before starting development than if developing such tools would take a year and a half. Moreover, given the increased attention paid to age assurance recently, there may well be vendors offering turnkey age assurance solutions that can be deployed rapidly. See Amicus Br. 8. That possibility further mitigates any urgency.

Accordingly, the Court finds that NetChoice's challenge to age assurance is not prudentially ripe. That is not to say that the challenge cannot become ripe at a later time if the effective date for age assurance approaches and Defendant still has not issued regulations. At some point, covered companies will have to begin implementing age assurance even if regulations have not yet been published. But NetChoice and its members are not at that point now. It is therefore prudent to wait for further developments.

B. Facial First Amendment Challenges

Facial challenges to a law's constitutionality are disfavored. Wash. State Grange v. Wash. State Republican Party, 552 U.S. 442, 450 (2008). For that reason, the Supreme Court “has [] made facial challenges hard to win.” Moody v. NetChoice, LLC, 603 U.S. 707, 723 (2024). The standard for prevailing on a facial challenge is “less demanding” in the First Amendment context, but it is still “rigorous.” Id. To succeed, a plaintiff must show that “a substantial number of [a challenged statute's] applications are unconstitutional” relative to “the statute's plainly legitimate sweep.” Id. (quoting Ams. for Prosperity Found. v. Bonta, 594 U.S. 595, 615 (2021)). This analysis proceeds in two steps. Courts start by “assess[ing] the state laws' scope” and identifying the activities and actors being regulated. Id. at 724. Then, courts must “decide which of the laws' applications violate the First Amendment, and [] measure them against the rest.” Id. at 725. If the unconstitutional applications substantially outnumber the constitutional ones, the laws are facially unconstitutional.

Below, the Court addresses in turn NetChoice's facial challenge to each provision of SB 976: restrictions on personalized feeds (§ 27001), limitations on the timing of notifications (§ 27002(a)), requirements to develop certain default settings (§ 27002(b)), and compelled disclosure of statistics related to minors' social media use (§ 27005).

1. Personalized Feeds

a. Covered Companies' Own Expression

NetChoice's main argument against the personalized feed provisions is that those provisions restrict social media platforms' own speech. From NetChoice's perspective, personalized feeds are inherently expressive, so SB 976's restrictions on those feeds impede free speech. The Court concludes that NetChoice has not shown a likelihood of success on that issue because it has failed to meet its burden of demonstrating, as Moody requires for facial challenges, that most or all personalized feeds covered by SB 976 are expressive and therefore implicate the First Amendment.

NetChoice claims that it satisfies this burden because Moody held that, as a matter of law, “[d]eciding on the third-party speech that will be included in or excluded from a compilation-and then organizing and presenting the included items-is expressive activity of its own.” Moody, 603 U.S. at 731. And that is precisely what personalized feeds do: compile and organize speech from social media users. On the surface, this argument accords with older cases holding that the exercise of editorial judgment (i.e., deciding what speech to publish and how to organize it) is usually protected by the First Amendment. For example, in Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), the Supreme Court held that states could not compel newspapers to provide political candidates with a right of reply. As it explained, doing so “intru[des] into the function of editors.” Id. at 258. So too in Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994). There, the Supreme Court held that cable operators were engaging in expressive activity when choosing which channels to carry because they were “exercising editorial discretion over which stations or programs to include.” Id. at 636 (citation omitted).

That argument reads too much into Moody and its forebears. Although it is true that Moody uses sweeping language that could be interpreted as saying that all acts of compiling and organizing speech, with nothing more, are protected by the First Amendment, Moody also expressly discusses situations where such activity did not receive protection. For instance, Moody observed that a mall could not claim a First Amendment right to exclude pamphleteers from its property because the mall was not “engaged in any expressive activity” when trying to exclude the pamphleteers and their speech. Moody, 603 U.S. at 730 (citing PruneYard Shopping Ctr. v. Robins, 447 U.S. 74 (1980)). Moody also explained that law schools could not claim a First Amendment right to exclude military recruiters from on-campus recruiting because those law schools were “not [] engaged in expression” and were “not speaking when they host [job] interviews.” Id. at 731 (quoting Rumsfeld v. F. for Acad. & Institutional Rts., Inc., 547 U.S. 47, 64 (2006)). That is, Moody stands for the proposition that restrictions on a private speaker's ability to compile and organize third-party speech implicate speech rights only if those restrictions impair the speaker's own expression. Moody, 603 U.S. at 730-31; see also id. at 740 (no First Amendment issues with requiring inclusion of speech when “the host of the third-party speech was not itself engaged in expression”). Justice Alito, joined by Justices Thomas and Gorsuch, further highlighted this point in his Moody concurrence. Id. at 780-83 (Alito, J., concurring).

This limit on the protections offered to speech compilations makes good sense. The touchstone of First Amendment speech rights is, after all, the protection of expression. See Sorrell v. IMS Health Inc., 564 U.S. 552, 567 (2011) (“It is true that restrictions on protected expression are distinct from restrictions on economic activity or, more generally, on nonexpressive conduct.”) (emphasis added). Activities that are expressive in one context are not necessarily expressive in all others, and courts should be careful to distinguish between “‘speech' and ‘nonspeech' elements [that] are combined in the same course of conduct” when analyzing First Amendment issues. United States v. O'Brien, 391 U.S. 367, 376 (1968). Consequently, even post-Moody, courts must inquire into whether an act of compiling and organizing third-party speech is expressive before they can determine whether that act receives First Amendment protection.

In response to this conclusion, NetChoice urges the Court to find that personalized feeds are always expressive even if not all acts of compiling and organizing third-party speech are expressive. From its perspective, the work that personalized feeds do is closely analogous to the editorial activities that Moody, Tornillo, Turner, and other similar precedents have found to be protected. There is some force to this suggestion. “‘[T]he basic principles of freedom of speech and the press . . . do not vary' when a new and different medium for communication appears.” Brown, 564 U.S. at 790 (quoting Joseph Burstyn, Inc. v. Wilson, 343 U.S. 495, 503 (1952)). So “analogies to old media, even if imperfect, can be useful.” Moody, 603 U.S. at 733. Yet, this does not mean that courts should uncritically assume that analogies to old media are always apt and that there is little meaningful difference between old and new. Quite the opposite. The Supreme Court has cautioned lower courts against reflexively “import[ing] law developed in very different contexts into a new and changing environment.” Denver Area Educ. Telecomms. Consortium, Inc. v. F.C.C., 518 U.S. 727, 740 (1996). It is not enough that a new function or activity (personalized feeds) “seems roughly analogous to a more familiar example from [] precedent” (editorial discretion). Moody, 603 U.S. at 749 (Jackson, J., concurring). While the same “settled principles about freedom of expression” apply regardless of the technology at issue, id. at 733, “new circumstances requir[e] different adaptations of prior principles and precedents.” Denver Area Educ. Telecomms., 518 U.S. at 740 (emphasis added). Indeed, the Supreme Court has a history of developing special rules for addressing free speech concerns in different and newly arising contexts. Id. at 741 (collecting cases).

With that caution in mind, the Court finds that old precedents on editorial discretion do not fully resolve the issue at hand regarding the expressiveness of personalized feeds. For instance, Tornillo involved editorial discretion in the traditional sense: human newspaper editors deciding what articles to publish and where to place them in the paper based on the editors' own judgments about newsworthiness and how the proposed articles fit their newspaper's journalistic point of view. The Tornillo decision left this unspoken, but that is likely because, in 1974 when the case was decided, there was no other way to make editorial decisions. Thus, editorial discretion and human judgments about the value of speech (whether based on the speech's importance, truth, entertainment, or some other criteria) were one and the same. Personalized feeds on social media platforms are different. Rather than relying on humans to make individual decisions about what posts to include in a feed, social media companies now rely on algorithms to automatically take those actions.

Due to these differences between traditional and social media, the Supreme Court was careful not to overextend itself in Moody. The First Amendment questions in Moody involved restrictions on content moderation policies that embodied human value judgments about the types of messages to be disfavored. When the social media platforms in Moody removed posts for violating their community standards, it was because people at those platforms found those messages to contain vile or dangerous ideas-such as support for Nazi ideology, glorification of gender violence, or advancement of phony medical treatments. Id. at 735-37. The content moderation policies at issue were much like the traditional forms of editorial discretion discussed in Tornillo and other prior precedents. This close analogy led the Supreme Court to hold that the First Amendment prevents governments from displacing the social media platforms' content moderation policies with the governments' own view of how different messages should be balanced. Id. at 741. Moody therefore had no occasion to address, and explicitly declined to address, “feeds whose algorithms respond solely to how users act online-giving them the content they appear to want, without any regard to independent content standards.” Id. at 736 n.5.

The question that Moody reserved is precisely the question that the Court must answer here. NetChoice seeks to avoid the question by claiming that the feeds at issue in this case do not “respond solely to how users act online.” That may be so, but it is not in the record. And nothing about the definition of covered feeds requires such feeds to do more than “respond solely to how users act online.” Indeed, a feed responding only to user activity would undoubtedly be covered under SB 976 because it recommends content based on “information provided by the user,” Cal. Health & Safety Code § 27000.5(a).

Moody discussed feeds on Facebook and YouTube and found them to contain expressive elements. 603 U.S. at 733-40, 744. It is not clear whether the Court can simply import Moody's findings into this case. But even if the Court could do so, those findings would only address two feeds out of many. Those findings alone would not be enough for NetChoice to meet its burden on a facial challenge of showing that most of SB 976's applications to personalized feeds are unconstitutional. That exercise would require, at a minimum, NetChoice to show that most feeds contain expressive elements. NetChoice has not provided the kind of wide-ranging record on the entire spectrum of personalized feeds in existence that would be necessary to make that showing.

Of course, the fact that Moody reserved judgment on feeds that “respond solely to how users act online” does not mean that such feeds must be non-expressive and unprotected. If applying SB 976 to such feeds uniformly implicates expression, then NetChoice might yet satisfy its facial burden. However, NetChoice made virtually no argument about whether that is so. For this reason alone, NetChoice has failed to meet its burden.

Regardless, the Court doubts that regulating feeds responding solely to user actions would uniformly interfere with expression. Regulations intrude on a speech compiler's expression when those regulations force the compiler to “alter its own message,” Pac. Gas & Elec. Co. v. Pub. Utils. Comm'n of Cal., 475 U.S. 1, 16 (1986), or prevent the compiler from sending its desired message. See Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos., Inc., 515 U.S. 557, 574-75 (1995) (parade organizer had a First Amendment right to express disapproval of a message by excluding it from the parade, and state law could not compel the organizer to forego sending that message of disapproval by requiring the organizer to allow the message in). In the latter circumstance, the message that a compiler seeks to send need not be particularly articulate, id. at 574, but to trigger the First Amendment, there must clearly be a message of some sort. Id. (“The message [the parade organizer] disfavored is not difficult to identify.”). For social media platforms, that message might be “the overall speech environment” created through the “messages that [allowed] posts convey in toto.” Moody, 603 U.S. at 739.

When it comes to feeds that recommend posts based solely on prior user activity, there is no apparent message being conveyed. At the outset, the Court observes that SB 976 restricts only the use of certain personalized information when compiling posts into feeds. SB 976 does not prevent social media platforms from carrying out content moderation like that discussed in Moody. Platforms can continue to remove posts containing disfavored ideas, such as racist ideology, while still complying with SB 976 because content moderation depends on “independent content standards” separate from a user's personal information. Id. at 736 n.5. As such, it is difficult to say that restricting use of personalized feeds would alter the overall speech environment on any social media platform in any appreciable way. In addition, it is also challenging to identify how personalization, as opposed to content moderation, might send any message.

NetChoice suggested at hearing that personalization of feeds conveys a message that the user receiving personalized recommendations might find a post interesting. That can undoubtedly be true. Algorithms operate personalized feeds, and humans create algorithms to achieve certain purposes. To do so, humans build rules into those algorithms that they believe will best serve their purposes. In that sense, an algorithm simply acts as a tool to implement a conscious human choice. Id. at 745-46 (Barrett, J., concurring). If a human designs an algorithm for the purpose of recommending interesting posts on a personalized feed, the feed probably does reflect a message that users receiving recommended posts are likely to find those posts interesting. This perspective suggests that an algorithm designed to convey a message can be expressive.

But what if an algorithm's creator has other purposes in mind? What if someone creates an algorithm to maximize engagement, i.e., the time spent on a social media platform? At that point, it would be hard to say that the algorithm reflects any message from its creator because it would recommend and amplify both favored and disfavored messages alike so long as doing so prompts users to spend longer on social media. Amicus Br. 5 (collecting news articles). To the extent that an algorithm amplifies messages that its creator expressly disagrees with, the idea that the algorithm implements some expressive choice and conveys its creator's message should be met with great skepticism. Moreover, while a person viewing a personalized feed could perceive recommendations as sending a message that she is likely to be interested in those recommended posts, that would reflect the user's interpretation, not the algorithm creator's expression. If a third party's interpretations triggered the First Amendment, essentially everything would become expressive and receive speech protections; a good lawyer would almost certainly be able to assign some plausible meaning to any action. Yet, the Supreme Court has made clear that it cannot “accept the view that an apparently limitless variety of conduct can be labeled ‘speech' [even when] the person engaging in the conduct intends thereby to express an idea.” O'Brien, 391 U.S. at 376.

The increasing prominence of artificial intelligence (AI) and machine learning only further complicates matters. As Justice Alito observed, “when AI algorithms make a decision, ‘even the researchers and programmers creating them don't really understand why the models they have built make the decisions they make.'” Moody, 603 U.S. at 795 (Alito, J., concurring) (quoting Tammy Xu, AI Makes Decisions We Don't Understand - That's a Problem (Jul. 19, 2021), https://builtin.com/artificial-intelligence/ai-right-explanation); see also id. at 746 (Barrett, J., concurring). This is an important observation because it makes it challenging to determine whether AI algorithms convey human expression. Imagine an AI algorithm that is designed to remove material that promotes self-harm. To set up that algorithm, programmers need to initially train it with data that humans have labeled ahead of time as either unacceptably promoting self-harm or not. Federal Judicial Center, An Introduction to Artificial Intelligence for Federal Judges 15 (2023), https://www.fjc.gov/sites/default/files/materials/47/AnIntroductiontoArtificialIntelligenceforFederalJudges.pdf. Thus, when that AI algorithm initially begins to operate, it will reflect those human judgments, and courts can plausibly say that it conveys a human's expressive choice. But as the algorithm continues to learn from other data, especially if the humans are not supervising that learning, that conclusion becomes less sound. Rather than reflecting human judgments about the messages that should be disfavored, the AI algorithm would seem to reflect more and more of its own “judgment.” Thus, it would become harder to say that the algorithm implements human expressive choices about what type of material is acceptable.

While the Court uses the word “judgment,” it is not at all clear that, in any relevant sense, an AI algorithm can reason through issues like a human can.

Up to this point, the Court has focused on content moderation and feeds that respond solely to user activity as a dichotomy, as if the personalized feeds regulated by SB 976 must be one or the other. But a personalized feed might recommend posts based on both content moderation policies and user activity, or both expressive and non-expressive factors. NetChoice does not address the possibility of “mixed” feeds even though such feeds raise numerous legal and factual questions. For instance, the relative weight assigned to expressive and non-expressive factors in an algorithm might be relevant to free speech issues. “[T]he First Amendment does not prevent restrictions directed at . . . conduct from imposing incidental burdens on speech.” Sorrell, 564 U.S. at 567. Regulating feeds that use algorithms mostly relying on non-expressive factors may not trigger First Amendment scrutiny at all because doing so only incidentally burdens any expressive component of those algorithms. Or, if it is very easy to separate an algorithm's expressive content moderation functions from non-expressive user-activity-based functions, a law prohibiting personalized feeds from relying on user activity information may also only incidentally burden speech. A covered company could just remove the user activity factors from the recommendation algorithms driving its media feeds. This latter possibility is especially significant here because SB 976 targets only recommendations based on user information. It does not prohibit covered entities from incorporating their content moderation guidelines into their recommendation algorithms.

In short, much of the First Amendment analysis depends on a close inspection of how regulated feeds actually function. Because NetChoice has not made a record that can be used to address these important questions, it has not met its burden to show facial unconstitutionality.

b. Users' Access to Speech

In addition to claiming that SB 976's personalized feed restrictions limit its members' own speech, NetChoice also claims that SB 976 limits social media users' access to speech. Although NetChoice may raise the rights of its members' users, Virginia v. Am. Booksellers Ass'n, 484 U.S. 383, 392-93 (1988), it has not shown that those rights are implicated by the personalized feed restrictions here. As the discussion in the previous section makes clear, personalized feeds are not necessarily a form of social media platforms' speech, so restricting personalized feeds does not restrict access to those platforms' speech. That said, NetChoice does not focus just on users' access to personalized feeds. According to NetChoice, restricting personalized feeds also restricts users' ability to access speech from the social media posts that they otherwise would have seen in their feeds. The theory seems to be that there is so much speech online that most of it is inaccessible as a practical matter; only if someone (like a social media platform) brings a post to a user's attention (say, through a personalized feed) is that post accessible. That is a curious argument because all posts are still, in fact, available to all users under SB 976. SB 976 does not require removal of any posts, and users may still access all posts by searching through the social media platforms. NetChoice has not offered any authority suggesting otherwise, and the Court is skeptical that speech becomes inaccessible simply because someone needs to proactively search for it. If that were the case, library books would be inaccessible unless a librarian recommends them because libraries hold too many books for a single person to sort through.

In conclusion, the Court finds that NetChoice has not met its burden to show likelihood of success on its First Amendment claim against SB 976's personalized feed provisions. NetChoice has failed to demonstrate a colorable facial claim based on the expressiveness of personalized feeds because it has not provided a record demonstrating that more personalized feeds than not are actually expressive. And NetChoice has not shown that SB 976 interferes with social media users' right to access speech because SB 976 does not remove any speech from social media platforms. So, the Court DENIES NetChoice's motion as to SB 976's personalized feed provisions.

2. Limits on Notifications

a. Level of Scrutiny

Unlike with personalized feeds, there is little question that notifications are expressive. So, the Court begins by determining the level of scrutiny that applies. One of the most fundamental principles of First Amendment law is that governments lack the “power to restrict expression because of its message, its ideas, its subject matter, or its content.” Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015) (quoting Police Dep't of Chi. v. Mosley, 408 U.S. 92, 95 (1972)). As such, content-based laws “are presumptively unconstitutional and may be justified only if” they survive strict scrutiny. Id. By contrast, a content-neutral law need only survive intermediate scrutiny. Porter v. Martinez, 68 F.4th 429, 439 (9th Cir. 2023) (citing Turner, 512 U.S. at 642).

A law is content based in two circumstances. First, it is content based on its face if it “applies to particular speech because of the topic discussed or the idea or message expressed.” Reed, 576 U.S. at 165. However, a law is not content based merely because it requires some “examination of speech or expression” if that examination is “only in service of drawing neutral [] lines.” City of Austin v. Reagan Nat'l Advertising of Austin, LLC, 596 U.S. 61, 69, 73 (2022). A law is not content based unless it discriminates between different “topic[s], subject matter[s], or viewpoint[s].” Id. at 72 (citation omitted). In this sense, it is not quite accurate to say that all content-based laws face strict scrutiny. Rather, it is content-discriminatory laws that face strict scrutiny. Second, even if a law is not facially content based, it is still treated as content based if it “cannot be justified without reference to the content of the regulated speech, or [was] adopted by the government because of disagreement with the message the speech conveys.” Reed, 576 U.S. at 164 (cleaned up) (quoting Ward v. Rock Against Racism, 491 U.S. 781, 791 (1989)).

Here, the prohibition on notifications does not itself differentiate by content on its face. During school hours and at night, notifications of all kinds, about any topic, sending any message, are prohibited from being sent to minors. No particular message or subject receives favorable treatment. However, NetChoice argues that SB 976's coverage definition creates content-based distinctions in two ways: (a) by explicitly including companies that facilitate social interactions and (b) by explicitly excluding companies that facilitate consumer review sharing. Cal. Health & Safety Code §§ 27000.5(b)(1) (incorporating a definition from Cal. Bus. & Prof. Code § 22675), 27000.5(b)(2)(A). The Court concludes that neither of those classifications is content based.

Social interactions can run the entire gamut of topics and ideas. Users can interact socially about politics, sports, art, or any other subject. And during those interactions, users can express any message or viewpoint they like. On the other side of the coin, speech that is not a social interaction can also be about any topic and reflect any viewpoint. Thus, distinguishing between social interactions and other forms of speech is content neutral since doing so does not discriminate based on subject matter or viewpoint. To see this, one can imagine a travel blog with and without commenting functionality. The former facilitates social interactions through its comment function while the latter does not. But both blogs can contain the exact same blog posts.

For similar reasons, distinguishing between websites that are limited to consumer reviews and those that are not does not create a content-based distinction either. This is a harder question, because reviews convey certain information that non-reviews cannot; in particular, reviews involve some level of assessment of a good or service while non-reviews do not contain that type of assessment. As such, the review/non-review distinction is closely analogous to solicitation bans that the Supreme Court has found to be content neutral in the past. City of Austin, 596 U.S. at 72. Just as reviews convey information that non-reviews cannot, so a solicitation conveys information that non-solicitations cannot: a request to obtain something or an attempt to gain business. Id. (citation omitted). Yet solicitation bans are still content neutral when they do not “inherently present ‘the potential for becoming a means of suppressing a particular point of view.'” City of Austin, 596 U.S. at 72 (quoting Heffron v. Int'l Soc'y for Krishna Consciousness, Inc., 452 U.S. 640, 649 (1981)). Such is the case here. A review can discuss any topic and express any viewpoint on those topics just as a non-review can. For example, a book review about Keynesian economics can either praise or criticize the book's arguments, while a non-review can directly praise or criticize Keynesian theories as well. Accordingly, SB 976's notification provisions are content neutral on their faces.

The notification provisions are also content neutral when considering the legislative purpose behind SB 976. NetChoice argues otherwise, alleging that SB 976 reflects a legislative purpose to undermine speech about specific topics. In support, NetChoice points to an expert declaration submitted in this case explaining that social media can be harmful to minors because it exposes them to “violent, scary, or sexualized images.” Radesky Decl. ¶¶ 61(b)-(e), 91, ECF No. 18-2. If the legislature were motivated by a desire to prevent children from being exposed to such images, that would plainly be a content-based motivation. But this declaration was submitted as post hoc evidence in a lawsuit filed after SB 976 passed. Nothing in the record suggests that anyone in the California Legislature knew or was motivated by this connection to violent and sexualized images when the Legislature passed SB 976. In determining whether legislative motive was improper, courts look to stated legislative ends and other evidence that is contemporaneous with a law's enactment. See McCullen v. Coakley, 573 U.S. 464, 480 (2014) (referring to a law's stated purpose); Sorrell, 564 U.S. at 565 (analyzing a law's express purpose and legislative findings); Colacurcio v. City of Kent, 163 F.3d 545, 552 (9th Cir. 1998) (considering “objective indicators of intent, including the face of the statute, the effect of the statute, comparison to prior law, facts surrounding enactment, the stated purpose, and the record of proceedings” when determining whether the purpose of a law is to suppress speech) (citation and internal quotation marks omitted); Nat'l Rifle Ass'n of Am. v. City of L.A., 441 F.Supp.3d 915, 931 (C.D. Cal. 2019) (relying on the “text of the Ordinance, the Ordinance's legislative history, and the concurrent public statements made by the Ordinance's primary legislative sponsor” to find that the challenged law was intended to suppress speech). An improper reason that could have motivated legislation is not relevant unless there is evidence showing that the improper reason actually motivated the legislature.

As a fallback, NetChoice argues that SB 976 is subject to strict scrutiny because its coverage definition draws speaker-based distinctions. NetChoice is correct that the law draws speaker-based distinctions, but the fact that a law is speaker-based does not automatically trigger strict scrutiny. “[P]rovisions [that] distinguish between speakers” are not subject to strict scrutiny if they do not distinguish based “upon the messages [those speakers] carry.” Turner, 512 U.S. at 645. Only when “speaker preference reflects a content preference” does strict scrutiny apply. Reed, 576 U.S. at 170 (quoting Turner, 512 U.S. at 658). Here, for the reasons above, the distinctions that SB 976 draws do not reflect a content preference. Accordingly, intermediate scrutiny applies.

NetChoice also posits that SB 976 distinguishes between speakers by distinguishing between user- and provider-generated content. That does not reveal any content preference, though, because either type of content can be about any topic and espouse any view.

b. Application of Scrutiny

Defendant has the burden of establishing that SB 976 is likely to survive intermediate scrutiny. Such scrutiny has three steps. First, Defendant must demonstrate that SB 976 “‘further[s] an important or substantial government interest' that [] must be ‘unrelated to the suppression of free expression.'” Porter, 68 F.4th at 443 (quoting O'Brien, 391 U.S. at 377). Second, burdens on speech must be “no greater than is essential to the furtherance of that interest,” though a regulation “need not be the least restrictive or least intrusive means of serving that interest.” Id. (first quoting O'Brien, 391 U.S. at 377; and then quoting Ward, 491 U.S. at 798) (internal quotations omitted). Finally, the regulation must “leave open ample alternative channels for communication.” Id. (quoting Clark v. Cmty. for Creative Non-Violence, 468 U.S. 288, 293 (1984)). On the record assembled to date, Defendant has failed at the second step to show that SB 976's notification provisions are properly tailored.

Starting at the first step, the Court finds that Defendant has established an important government interest: the protection of children's health. See SB 976 § 1(b) (noting that social media platforms “pose a significant risk of harm to the mental health and well-being of children and adolescents”). As the Supreme Court recognizes, “the need to protect children from exposure” to harmful material is an “extremely important justification, one that [it] has often found compelling.” Denver Area Educ. Telecomms., 518 U.S. at 743 (collecting cases). That is a good start, but in claiming an interest in protecting children, governments must go beyond the general and abstract to prove that the activities they seek to regulate actually harm children. Brown, 564 U.S. at 799. In Brown, the government failed to do so because it provided evidence only of correlation, not causation. Id. at 799-800. However, Brown involved strict scrutiny, not intermediate scrutiny as applies here. Id. at 799. And in any event, Defendant here has provided evidence of both causation and correlation. His experts have pointed both to large observational studies that show correlation and to smaller experimental studies that show causation. See Feder Decl., ECF No. 18-2; Radesky Decl. All these studies show that social media can cause harms to minors.

Even so, Defendant has failed to show that SB 976's notification provisions are properly tailored because the provisions are extremely underinclusive. Even on intermediate scrutiny, underinclusiveness can be fatal on its own when severe enough. See Nat'l Inst. of Family & Life Advocs. v. Becerra, 585 U.S. 755, 773-74 (2018) (finding that a compelled speech provision failed intermediate scrutiny because it was “wildly underinclusive”); see also Brown, 564 U.S. at 802 (when applying strict scrutiny, a finding that a “regulation is wildly underinclusive . . . is alone enough to defeat it”). The motivation for banning notifications to minors during school hours and at night seems to be a concern that notifications will distract minors from school or interrupt their sleep. SB 976 § 1(d); see also, e.g., Radesky Decl. ¶¶ 24, 28-29, 33, 37-41. But if that is the case, why not prohibit all notifications during those hours? As NetChoice observed at the hearing, a sports website such as ESPN can, during prohibited hours, send notifications about, for instance, a minor's favorite team winning a national championship, but Facebook cannot send the same notification. Both notifications seem equally capable of causing distractions or sleep disruptions, so by allowing notifications from non-covered companies, SB 976 undermines its own goal. As a result, SB 976 appears to restrict significant amounts of speech for little gain.

There may well be an explanation for why Facebook's notification is more likely to distract students or disrupt sleep than ESPN's notification. The Court can speculate as to why that may be, but the Court's own speculations are not enough. It is Defendant's burden to show that SB 976 is likely to survive intermediate scrutiny by providing evidence. He has not done so.

As a result, the Court finds that NetChoice is likely to succeed on its First Amendment claim against SB 976's notification provisions. The remaining preliminary injunction factors also favor granting an injunction, as NetChoice's members would suffer irreparable harm from First Amendment violations, and there is a strong public interest in protecting free speech. Klein, 584 F.3d at 1208; Elsasser, 32 F.4th at 731. Therefore, the Court GRANTS a preliminary injunction as to SB 976's notification provisions.

3. Default Settings

Next, the Court turns to the five default settings that SB 976 requires covered entities to create. Three of them are effectively duplicative of SB 976's personalized feed and notification provisions. Cal. Health & Safety Code §§ 27002(b)(1) (notification provisions), (2) and (4) (personalized feed provisions). The Court's analysis above applies equally to those three settings. Accordingly, the Court ENJOINS § 27002(b)(1) but not §§ 27002(b)(2) and (4).

The two remaining default settings present different questions. The Court begins with § 27002(b)(3), which requires covered entities to create a setting that allows parents to “[l]imit their child's ability to view the number of likes or other forms of feedback.” This setting must be turned on by default. Notably, this setting would only restrict a minor's ability to view the number of likes and other forms of feedback, not the minor's ability to view the underlying likes and feedback. It is far from obvious that this setting implicates the First Amendment at all because the underlying speech is still viewable. Further, the Court sees little apparent expressive value in displaying a count of the total number of likes and reactions. Even assuming the First Amendment applies, though, this setting would face at most intermediate scrutiny because it applies to all forms of feedback and is therefore content neutral. It easily survives intermediate scrutiny. The setting is motivated by an important interest-protecting minors from harms to self-esteem and mental health that arise when minors fixate on the number of positive reactions they receive. SB 976 § 1(c); see also, e.g., Radesky Decl. ¶¶ 53, 72-73, 88. The setting is also minimally restrictive because all the underlying speech is still available for viewing. By removing automatic counters, the setting only makes it harder to fixate on the number of likes received and therefore discourages minors from doing so. Finally, since the underlying reactions are still viewable, virtually no speech has been blocked. For these reasons, the Court DENIES a preliminary injunction as to § 27002(b)(3).

Lastly, § 27002(b)(5) requires covered entities to create a private mode setting that prevents users from viewing or responding to a child's posts unless that user is connected with the child on social media (such as becoming friends with the child on Facebook). This obviously regulates speech because it limits the ability of users to speak with minors on social media platforms. But it is content neutral because it does not discriminate based on message. And it survives intermediate scrutiny: It is well-known that adults on the internet can exploit minors through social media, and implementing a private mode would reduce that danger. It is not particularly restrictive because the minor can still speak to any user she wishes to if that user requests to connect and the minor accepts. And the ability for users to request to connect with minors leaves open adequate channels of communication. So, the Court also DENIES a preliminary injunction as to § 27002(b)(5).

4. Compelled Disclosures

Finally, the Court addresses SB 976's compelled disclosure requirement. Cal. Health & Safety Code § 27005. Under this requirement, covered companies must annually disclose “the number of minor users of [their covered platforms], and of that total the number for whom the operator has received verifiable parental consent to provide [a covered] feed, and the number of minor users as to whom the controls set forth in Section 27002 are or are not enabled.” Id.

Defendant argues that this provision is constitutional under Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626 (1985), because it compels information that is purely factual and uncontroversial. Zauderer, however, applies to “First Amendment claim[s] involving compelled commercial speech.” Am. Beverage Ass'n v. City & Cnty. of S.F., 916 F.3d 749, 756 (9th Cir. 2019) (en banc) (emphasis added). Commercial speech is “usually defined as speech that does no more than propose a commercial transaction.” United States v. United Foods, Inc., 533 U.S. 405, 409 (2001). Disclosing information about the number of minors using a social media platform does not in any way propose a commercial transaction.

That said, the inquiry does not end here as this propose-a-transaction definition is more of a “starting point” for identifying commercial speech than an exclusive definition. X Corp. v. Bonta, 116 F.4th 888, 900 (9th Cir. 2024) (quoting Ariix, LLC v. NutriSearch Corp., 985 F.3d 1107, 1115 (9th Cir. 2021)). The ultimate determination on whether speech is commercial or noncommercial depends on the unique facts of a case. In evaluating those facts, courts focus on three Bolger factors: (1) whether speech is an advertisement, (2) whether speech refers to a particular product, and (3) whether there is an economic motivation to speak. Hunt v. City of L.A., 638 F.3d 703, 715 (9th Cir. 2011) (citing Bolger v. Youngs Drug Prods. Corp., 463 U.S. 60, 66-67 (1983)). The inquiry into these three factors is holistic-not every one of those factors is required to find that speech is commercial. Bolger, 463 U.S. at 67 n.14.

In this instance, the speech required by § 27005 fails at least the first and third Bolger factors. The disclosures about minor users are plainly not advertisements. And because the compelled disclosures do not report “existing commercial speech, [] a social media company has no economic motivation in their content.” X Corp., 116 F.4th at 901. Only the second Bolger factor arguably supports a finding that the compelled disclosures are commercial speech because the number of minor users is related to the particular service that a social media company provides. But the compelled information does not seem commercially relevant. It is not like terms of service that a consumer might be interested in when deciding whether to use a social media platform. Nor does it give much insight into how covered entities run their social media platforms; rather, the disclosures report how users behave on those platforms. The disclosures also say nothing about the quality of features on those platforms that might be relevant to consumers deciding between different platforms. Regardless, the fact that the § 27005 disclosure does not accord with the “starting point” definition of commercial speech and does not meet at least two of the Bolger factors strongly suggests that it is not commercial speech. See id.

Consequently, Zauderer does not apply, and the Court must determine whether intermediate or strict scrutiny applies by assessing § 27005's content neutrality. The disclosure provision is content based because it requires covered entities to speak on specific topics but not others. Accordingly, strict scrutiny applies, and the Court finds that § 27005 fails strict scrutiny at least because it is not narrowly tailored. See Brown, 564 U.S. at 799. Defendant argues that SB 976 serves California's interest in protecting minors, but compelling disclosures about the number of minors using a social media platform makes no discernable contribution to that interest. The Court sees no reason why revealing to the public the number of minors using social media platforms would reduce minors' overall use of social media and associated harms. Nor does the Court see why disclosing statistics about parental consent would meaningfully encourage parents to withhold consent from social media features that might cause harm.

Therefore, NetChoice has shown that it is likely to succeed in its First Amendment claim against the compelled disclosure requirement. Being compelled to speak in violation of the First Amendment is irreparable harm, and the public interest cuts against compelling unwanted speech. See Klein, 584 F.3d at 1208; Elsasser, 32 F.4th at 731. So, the Court GRANTS a preliminary injunction as to SB 976's compelled disclosure provision.

C. As-Applied First Amendment Challenges

Having finished with NetChoice's facial challenges, the Court turns to NetChoice's as-applied challenges. Since the Court concluded that NetChoice is entitled to preliminary injunctions against SB 976's notification and compelled disclosure provisions based on NetChoice's facial challenges, there is no need to address the as-applied challenges to those provisions. Any relief that the as-applied challenges could yield would be subsumed within the injunction from the facial challenges. As for NetChoice's as-applied challenges to the default settings regarding the number of reactions and private mode, the Court sees no meaningful difference between the facial analysis and the as-applied analysis. Accordingly, the Court DENIES an as-applied injunction as to those two settings as well.

That leaves NetChoice's as-applied challenge to SB 976's personalized feed provisions. As the Court explained in detail above, NetChoice's facial challenge fails at this stage because the expressive qualities of personalized feeds may differ between social media platforms, and NetChoice has not exhaustively surveyed all the personalized feeds covered by SB 976. Supra Section III.B.1.a. But on an as-applied challenge, NetChoice's task would be less daunting since NetChoice would only need to detail the functioning of personalized feeds for the five members on whose behalf it raises those as-applied challenges. NetChoice lacks associational standing to raise these as-applied challenges, though.

To have associational standing, NetChoice must show that “(a) its members would otherwise have standing to sue in their own right; (b) the interests it seeks to protect are germane to the organization's purpose; and (c) neither the claim asserted nor the relief requested requires the participation of individual members in the lawsuit.” Hunt, 432 U.S. at 343. There is little question that NetChoice meets the first two requirements. It is on the third requirement that its as-applied challenges fail. Challenges to SB 976's personalized feed provisions require deep factual inquiries into how a particular social media feed works. Supra Section III.B.1.a. In other words, each separate as-applied challenge on behalf of each separate NetChoice member requires its own “ad hoc factual inquiry.” Ass'n of Christian Schs. Int'l v. Stearns, 678 F.Supp.2d 980, 986 (C.D. Cal. 2008) (citation omitted), aff'd, 362 Fed.Appx. 640, 644 (9th Cir. 2010) (need for “individualized proof” defeats associational standing). For the Court to adjudicate these as-applied claims, it is therefore necessary for NetChoice's individual members to participate in this lawsuit, at the very least for the purpose of conducting discovery into how each of those members' feeds work. So, the Court finds that NetChoice lacks associational standing to raise as-applied challenges to SB 976's personalized feed provisions and DENIES a preliminary injunction on that basis.

D. Void for Vagueness Challenge

Finally, NetChoice argues that the Court should enjoin SB 976 in its entirety because it is too vague. Facial vagueness challenges, however, are difficult to win. “[P]erfect clarity and precise guidance have never been required even of regulations that restrict expressive activity.” Ward, 491 U.S. at 794. And close or difficult cases abound-that such cases can be imagined is not grounds to invalidate a law for vagueness. United States v. Williams, 553 U.S. 285, 305-06 (2008).

Here, NetChoice points to two sources of potential vagueness. First, it suggests that SB 976's definition of covered feeds is vague because it is unclear how the definition's exceptions interact with the base definition of a covered feed. There is nothing confusing about how the two interact. As long as any one of the exceptions is met, either “alone or in combination with one another,” a feed is not covered even if it otherwise meets the base definition. Cal. Health & Safety Code § 27000.5(a). Second, NetChoice claims that it is vague for SB 976 to define covered entities as those that offer covered feeds as a “significant part” of their services. Id. § 27000.5(b)(1). NetChoice claims that “significant” is open-ended and subjective. But qualitative words of degree like “significant” are common in our statutes. They are an unavoidable part of the law. So, “[a]s a general matter, [courts] do not doubt the constitutionality of laws that call for the application of a qualitative standard such as ‘substantial risk' to real-world conduct; ‘the law is full of instances where a man's fate depends on his estimating rightly . . . some matter of degree.'” Johnson v. United States, 576 U.S. 591, 603-04 (2015) (quoting Nash v. United States, 229 U.S. 373, 377 (1913)). For this reason, the Court DENIES a preliminary injunction on NetChoice's vagueness argument.

E. Severability

Because the Court found that some portions of SB 976 are likely unconstitutional while others are not, the Court ends by considering whether the likely unconstitutional provisions can be severed such that the Court can enjoin only those provisions. As SB 976 is a California law, the Court applies California's rules of severability. Sam Francis Found. v. Christies, Inc., 784 F.3d 1320, 1325 (9th Cir. 2015). Under California law, the presence of a severability clause creates a presumption that provisions are severable. Garcia v. City of L.A., 11 F.4th 1113, 1120 (9th Cir. 2021) (citing Cal. Redevelopment Ass'n v. Matosantos, 53 Cal.4th 231, 270 (2011)). Such a clause exists in SB 976. Cal. Health & Safety Code § 27007.

In addition, California law requires courts to evaluate whether “the invalid portion of a statute is ‘grammatically, functionally, and volitionally’ severable” from the valid remainder of the statute. NetChoice, 113 F.4th at 1124 (quoting Calfarm Ins. Co. v. Deukmejian, 48 Cal.3d 805, 821 (1989)). A provision is grammatically severable when it is “distinct and separate” and “can be removed as a whole without affecting the wording of any other provision.” Calfarm, 48 Cal.3d at 822. That is the case here, where each likely unconstitutional provision exists in a separate section or subsection of SB 976. By contrast, functional severability is met when an invalid provision “is not necessary to the measure's operation and purpose.” Hotel Emps. & Rest. Emps. Int'l Union v. Davis, 21 Cal.4th 585, 613 (1999). Once more, that is the case here. The notification and compelled disclosure provisions are not prerequisites for any valid provision's operation. And part of the reason that the Court enjoined those provisions is that they would not contribute much to SB 976's purpose. Finally, volitional severability asks “whether the remainder [of a statute] would have been adopted by the legislative body had the latter foreseen the partial invalidation of the statute.” Matosantos, 53 Cal.4th at 271 (citation and internal quotations omitted). SB 976's express severability clause and the relative ineffectiveness of the provisions to be severed lead the Court to answer “yes.”

Therefore, the Court finds the likely unconstitutional provisions to be severable.

IV. CONCLUSION

For the reasons above, the Court GRANTS IN PART and DENIES IN PART NetChoice's motion for a preliminary injunction. Because the Court decided this motion on a highly abbreviated schedule and the parties had the opportunity to assemble only a thin record in that time, the Court emphasizes that this preliminary injunction order is just that-preliminary. Further factual development could well reveal that provisions enjoined now are constitutional, or that provisions not enjoined now are unconstitutional. For the time being, though, the Court ENJOINS Defendant from enforcing California Health and Safety Code §§ 27002(a), 27002(b)(1), and 27005. Defendant may enforce the remainder of the law.

IT IS SO ORDERED.

