- by Fox News
- 16 Nov 2024
A pair of cases going before the US supreme court this week could drastically upend the rules of the internet, putting a powerful, decades-old statute in the crosshairs.
At stake is a question that has been foundational to the rise of big tech: should companies be legally responsible for the content their users post? Thus far they have evaded liability, but some US lawmakers and others want to change that. And new lawsuits are bringing the statute before the supreme court for the first time.
Both cases were brought by family members of terrorist attack victims who say social media firms are responsible for stoking violence with their algorithms. The first case, Gonzalez v Google, had its first hearing on 21 February and will ask the highest US court to determine whether YouTube, the Google-owned video website, should be held responsible for recommending Islamic State terrorism videos. The second, which will be heard later this week, targets Twitter and Facebook in addition to Google with similar allegations.
While lawmakers across the aisle have pushed for reforms to the 27-year-old statute, contending companies should be held accountable for hosting harmful content, some civil liberties organizations as well as tech companies have warned changes to section 230 could irreparably debilitate free-speech protections on the internet.
Gonzalez v Google centers on whether Google can be held accountable for the content that its algorithms recommend, threatening longstanding protections that online publishers have enjoyed under section 230.
In the second case, Twitter v Taamneh, family members of a victim of a 2017 terrorist attack allegedly carried out by IS argue that social media firms are to blame for the rise of extremism. The case targets Google as well as Twitter and Facebook.
Passed in 1996, section 230 protects companies such as YouTube, Twitter and Facebook from being held legally responsible for content posted by users. Civil liberties groups point out the statute also offers valuable protections for free speech by giving tech platforms the right to host an array of information without undue censorship.
A crackdown on algorithmic recommendations would affect nearly every social media platform. Most moved away from simple chronological feeds after Facebook launched its News Feed in 2006, an algorithmically driven homepage that recommends content to users based on their online activity.
As arguments in the Gonzalez case began on Tuesday, justices struck a cautious tone on section 230, warning that changes could trigger a wave of lawsuits. Justice Elena Kagan questioned whether its protections were too sweeping, but she indicated the court had more to learn before making a decision.
Firms including Reddit, Twitter and Microsoft, as well as digital rights groups such as the Electronic Frontier Foundation, have filed briefs with the court arguing that making platforms liable for algorithmic recommendations would have grave effects on free speech and internet content.