The Supreme Court of the United States will hear cases this year challenging Section 230 of the Communications Decency Act, a law that has been a cornerstone of the IT sector for more than two decades, and its rulings could soon reshape free speech on the internet.
Elon Musk said publicly, when he made his offer to purchase Twitter for more than $40 billion, that he wanted to make the social media platform “an inclusive arena for free speech.”
Musk’s actions since finalizing the transaction last year have shown how he sees the balance that internet companies must strike between ensuring user safety and preserving free expression. While he lifted restrictions on many accounts that had previously been suspended, including the account of former president Donald Trump, he also imposed new restrictions on accounts belonging to journalists and others for posting publicly accessible flight information, which he equated to doxxing.
The controversy surrounding Musk’s purchase of Twitter has highlighted how difficult it is to decide what speech is actually protected. When it comes to online platforms, which make decisions that affect large segments of people from many cultures and legal systems around the globe, that topic is especially challenging.
The Supreme Court of the United States will hear cases this year that may help define the parameters of free expression on the internet. These cases may put pressure on Musk and other platform owners, who control which posts are widely disseminated.
The questions they will examine include whether social media platforms have the right to remove posts based on viewpoint or to keep their algorithms from promoting them, the extent of platforms’ obligations to do so, and whether the government can impose online safety regulations that, some civil society organizations worry, may push platforms to stifle vital resources and messages in order to avoid legal liability.
“The question of free speech is always more complicated than it looks,” said David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “There’s a freedom to speak freely. But there’s also the freedom to be free from harassment, to be free from discrimination.”
When content moderation guidelines are changed, Brody advised that individuals think about “whose speech gets silenced when that dial gets turned? Whose speech gets silenced because they are too fearful to speak out in the new environment that is created?”
Tech’s liability shield under threat
The Communications Decency Act’s Section 230 has been a cornerstone of the IT sector for more than two decades. The law gives internet platforms a liability shield that protects them from being held accountable for content posted by their users, while simultaneously allowing them to control what stays up or is taken down.
The protections for the multibillion-dollar companies have come under increasing pressure from lawmakers on both sides of the aisle, despite the fact that industry leaders claim it’s what has allowed online platforms to grow and innovate. Many Democrats want platforms to remove more hateful content, while Republicans want to leave up more posts that support their viewpoints.
The Section 230 protection makes it simpler for platforms to let users share their opinions without worrying that they would be held liable for those statements. Additionally, it gives platforms comfort in knowing that they won’t face consequences if they decide to delete or downgrade content that they judge to be harmful or undesirable in some way.
The following cases pose a threat to Section 230’s protections:
- Gonzalez v. Google: This is a Supreme Court case with the potential to alter the most popular business models of the internet that currently allow for a largely free-flowing stream of posts. The case, brought by the family of an American who was killed in a 2015 terrorist attack in Paris, seeks to determine whether Section 230 can shield Google from liability under the Anti-Terrorism Act, or ATA, for allegedly aiding and abetting ISIS by promoting videos created by the terrorist organization through its recommendation algorithm. If the court significantly increases the liability risk for platforms using algorithms, the services may choose to abandon them or greatly diminish their use, thereby changing how content is discovered or goes viral on the internet. It will be heard by the Supreme Court in February.
- Twitter v. Taamneh: This Supreme Court case, which the justices will hear in February, doesn’t directly involve Section 230, but its outcome could still impact how platforms choose to moderate information on their services. The case, also brought under the ATA, deals with the question of whether Twitter should have taken more aggressive moderating action against terrorist content because it moderates posts on its site. Jess Miers, legal advocacy counsel at the tech-backed group Chamber of Progress, said a ruling against Twitter in the case could create an “existential question” for tech companies by forcing them to rethink whether monitoring for terrorist content at all creates legal knowledge that it exists, which could later be used against them in court.
- Challenges to Florida and Texas social media laws: Another set of cases deals with whether services should be required to host more content of certain kinds. Two tech industry groups, NetChoice and the Computer & Communications Industry Association, filed suit against the states of Florida and Texas over their laws seeking to prevent online platforms from discriminating on their services based on viewpoint. The groups argue that the laws violate the businesses’ First Amendment rights by forcing them to host objectionable messages even if they violate the companies’ terms of service, policies, or beliefs. The Supreme Court has yet to decide if or when to hear the cases, though many watchers expect it will take them up at some point.
- Tech challenge to California’s kids’ online safety law: Separately, NetChoice also filed suit against California over a new law there that aims to make the internet safer for kids but that the industry group says would unconstitutionally restrict speech. The Age-Appropriate Design Code requires internet platforms that are likely to be accessed by children to mitigate risks to those users. But in doing so, NetChoice has argued, the state imposed an overly vague rule subject to the whims of what the attorney general deems to be appropriate. The group said the law will create “overwhelming pressure to over-moderate content to avoid the law’s penalties for content the State deems harmful,” which will “stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information.” This case is still at the district court level.
The tension between the cases
The variety of these online speech cases highlights how difficult it is to regulate the online speech environment.
“On the one hand, in the NetChoice cases, there’s an effort to get platforms to leave stuff up,” said Jennifer Granick, surveillance and cybersecurity counsel at the ACLU Speech, Privacy, and Technology Project. “And then the Taamneh and the Gonzalez case, there’s an effort to get platforms to take more stuff down and to police more thoroughly. You kind of can’t do both.”
If the Supreme Court ultimately agrees to hear arguments in the Texas or Florida social media cases, it may be challenging for it to reconcile its decision there with its ruling in the Gonzalez case.
According to Samir Jain, vice president of policy at the Center for Democracy and Technology, a nonprofit organization that has received funding from tech companies like Google, if the court in Gonzalez rules that platforms can be held accountable for hosting certain types of user posts or promoting them through their algorithms, that holding would be in some tension with the Florida and Texas laws, which seek to require providers to carry third-party content.
“Because if on the one hand, you say, ‘Well, if you carry terrorist-related content or you carry certain other content, you’re potentially liable for it.’ And they then say, ‘But states can force you to carry that content.’ There’s some tension there between those two kinds of positions,” Jain said. “And so I think the court has to think of the cases holistically in terms of what kind of regime overall it’s going to be creating for online service providers.”
Jim Steyer, the CEO of Common Sense Media, blasted the arguments put forth by organizations with ties to the internet industry that oppose the California law and other steps to protect children online. While acknowledging criticism of the law from other quarters as well, he cautioned against letting “the perfect be the enemy of the good.”
“We’re in the business of trying to get stuff done concretely for kids and families,” Steyer said. “And it’s easy to make intellectual arguments. It’s a lot tougher sometimes to get stuff done.”
How degrading Section 230 protections could change the internet
Regardless of how the courts rule in these cases, any weakening of Section 230’s safeguards would likely have a real impact on how internet companies do business.
In its brief submitted to the Supreme Court on January 12, Google cautioned that denying YouTube Section 230 protections in the Gonzalez case “could have devastating spillover effects.”
“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” Google wrote. It added that if tech platforms were able to be sued without Section 230 protection for how they organize information, “the internet would devolve into a disorganized mess and a litigation minefield.”
According to Google, such a shift would also reduce internet security and support for free speech.
“Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether,” General Counsel Halimah DeLaine Prado wrote in a blog post summarizing Google’s position. “That would leave consumers with less choice to engage on the internet and less opportunity to work, play, learn, shop, create, and participate in the exchange of ideas online.”
Even if Google technically prevails at the Supreme Court, Miers of Chamber of Progress said, the justices will likely try to “split the baby” by adopting a new standard for when Section 230 protections apply, such as carving out algorithmic recommendations. A decision like that, Miers said, would seriously impair the law’s ability to quickly put an end to legal actions brought against platforms for hosting third-party content.
“Now we’re going to get in a situation where every case plaintiffs bringing their cases against internet services are going to always try to frame it as being on the other side of the line that the Supreme Court sets up,” Miers said of a scenario in which the court draws such a line. “And then there’s going to be a lengthy discussion of the courts asking, well does Section 230 even apply in this case? But once we get to that lengthy discussion, the entire procedural benefits of 230 have been mooted at that point.”
According to Miers, if the Supreme Court ultimately weakens the Section 230 safeguards and leaves a fragmented legal regime for content moderation in place, that might serve as a catalyst for Congress to take up the new issues. She pointed out that Section 230 itself was written by two lawmakers from opposite parties who recognized the new legal complications brought about by the emergence of the internet.
“Maybe we have to sort of relive that history and realize that, oh, well, we’ve made the regulatory environment so convoluted that it’s risky again to host user-generated content,” Miers said. “Yeah, maybe Congress needs to act.”