In a recent post, we addressed the US Department of Justice’s recommendations to reform Section 230 of the Communications Decency Act (CDA) to incentivize online platforms to address illicit material, as well as the Platform Accountability and Consumer Transparency (PACT) Act, legislation proposed by two US senators that is likewise aimed at reforming Section 230. Since then, we’ve continued to track developments regarding Section 230, and we have some updates for our readers.
First, Federal Communications Commission Chairman Ajit Pai recently issued a statement indicating that the FCC intends to proceed with rulemaking action regarding the interpretation of Section 230 after “[m]embers of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity” provided under Section 230 of the CDA. Chairman Pai explained, “[A]s elected officials consider whether to change the law, the question remains: What does Section 230 currently mean? . . . The Commission’s General Counsel has informed me that the FCC has the legal authority to interpret Section 230. Consistent with this advice, I intend to move forward with a rulemaking to clarify its meaning.”
Then, the US Supreme Court denied certiorari in a case that would have given the Court an opportunity to review the scope of Section 230 of the CDA. In conjunction with the denial, Justice Clarence Thomas released a statement agreeing with the Court’s decision in this particular case, but explaining why he believes that, in a more appropriate case, the Court should “consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms.”
Earlier this week, legislation addressing Section 230 of the CDA was proposed in the House of Representatives. According to its sponsors, the Protecting Americans from Dangerous Algorithms Act is intended to hold certain online platforms accountable if they use algorithms to amplify or recommend what the sponsors call “harmful, radicalizing content that leads to offline violence.” The legislation would apply only to platforms with at least 50 million unique monthly users.
Finally, later this month, the Senate Committee on Commerce, Science, and Transportation is scheduled to conduct a hearing titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” The hearing will, among other things, examine the legislative proposals for modernization and include testimony from major stakeholders in the technology industry.
This surely won’t be the last we hear about this divisive issue, and we’ll keep our readers apprised of meaningful developments as they become available.