#contentmoderation

"Once again, several Senators appear poised to gut one of the most important laws protecting internet users - Section 230 (47 U.S.C. § 230).

Don’t be fooled - many of Section 230’s detractors claim that this critical law only protects big tech. The reality is that Section 230 provides limited protection for all platforms, though the biggest beneficiaries are small platforms and users. Why else would some of the biggest platforms be willing to endorse a bill that guts the law? In fact, repealing Section 230 would only cement the status of Big Tech monopolies.

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online.

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others."

eff.org/deeplinks/2025/03/230-

Electronic Frontier Foundation · 230 Protects Users, Not Big Tech

University of South Australia: New AI model detects toxic online comments with 87% accuracy. “A team of researchers from Australia and Bangladesh has built a model that is 87% accurate in classifying toxic and non-toxic text without relying on manual identification. Researchers from East West University in Bangladesh and the University of South Australia say their model is an improvement on […]

https://rbfirehose.com/2025/03/22/university-of-south-australia-new-ai-model-detects-toxic-online-comments-with-87-accuracy/

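The blurb doesn't say what architecture the researchers used, so purely as orientation, here is a minimal sketch in Python of a common baseline for the same task: TF-IDF features feeding a logistic regression classifier. The comments.csv file, its column names, and every parameter below are hypothetical stand-ins, not the paper's setup.

```python
# Baseline toxicity classifier sketch: TF-IDF + logistic regression.
# Hypothetical data: comments.csv with columns "text" and "toxic" (0/1).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("comments.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["toxic"], test_size=0.2, random_state=42
)

# Word bigrams help catch toxic phrases that single tokens miss.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_features=50_000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(f"accuracy: {accuracy_score(y_test, preds):.2%}")
```

One caveat worth keeping in mind when reading headline numbers like "87% accurate": toxic examples are usually the rare class, so a real evaluation also needs per-class precision and recall, which plain accuracy can hide.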
Looks like Mastodon is going to need better moderator tools

Underprivileged people are apparently especially easy to target on #ActivityPub, or so I have been told, and I believe it. They have been complaining about it to the Mastodon developers for years, but the developers at best don’t give a shit, at worst are hostile to the idea, and have mostly ignored these criticisms. Well, now we have “Nicole,” the infamous “Fediverse Chick”, a spambot that seems to be registering hundreds of accounts across several #Mastodon instances and, once registered, sends everyone a direct message introducing itself.

You can’t block it by domain or by name, since the names keep changing and the accounts span multiple instances. It is the responsibility of each domain to prevent registrations of bots like this.

But what happens when the bot designer ups the ante? What happens when they try this approach with a different name each time? Who is to say that isn’t already happening without us noticing? This looks like an attempt to show everyone a huge weakness in the content moderation toolkit, and we are way overdue to address it.
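To make the gap concrete, here is a sketch (in Python, purely illustrative) of the kind of heuristic a better toolkit could ship: fingerprint the content of first-contact DMs instead of the sender's name or domain, both of which the bot rotates. Nothing here is a real Mastodon API; the Message type and both thresholds are hypothetical, which is exactly the missing piece the post is complaining about.

```python
# Sketch: flag DM campaigns by content fingerprint, not sender identity.
import hashlib
import re
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    sender: str            # e.g. "nicole42@some.instance" (hypothetical)
    account_age_days: int  # how fresh the sending account is
    body: str

def fingerprint(body: str) -> str:
    # Normalize case and whitespace so trivial edits keep the same hash.
    normalized = re.sub(r"\s+", " ", body.strip().lower())
    return hashlib.sha256(normalized.encode()).hexdigest()

def flag_campaigns(messages: list[Message],
                   min_senders: int = 20,
                   max_account_age: int = 2) -> set[str]:
    """Return fingerprints sent by many distinct, freshly created accounts."""
    senders_by_fp: dict[str, set[str]] = defaultdict(set)
    for m in messages:
        if m.account_age_days <= max_account_age:
            senders_by_fp[fingerprint(m.body)].add(m.sender)
    return {fp for fp, senders in senders_by_fp.items()
            if len(senders) >= min_senders}
```

The obvious counter-move is mutating the message body per recipient, at which point you need fuzzier fingerprints (shingling, MinHash) and start risking false positives on legitimate mass greetings. Which is the post's point: this is an arms race, and the toolkit is barely at round one.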

Gizmodo: Reddit Accused of Cracking Down on Luigi Mangione Content. “Slate recently noted that some forums on Reddit had been getting censored for their discussion of the accused killer. A moderator for one long-running subreddit, /r/popculture, was suspended by the platform after failing to quell the ongoing discussion of Mangione. The action followed an unusual warning from the site’s […]

https://rbfirehose.com/2025/03/17/gizmodo-reddit-accused-of-cracking-down-on-luigi-mangione-content/


Tamar Mitts, professor of international and public affairs at Columbia University, breaks down for @time why the fight against online extremism keeps failing, and why we need to understand that the issue "is bigger than any one site can handle."

flip.it/qS0RGQ

TIME · Why the Fight Against Online Extremism Keeps Failing
Yes, Big Tech can do more. But all online spaces must commit to a more unified stance against extremism.

"Mark Zuckerberg might be done with factchecking, but he cannot escape the truth. The third richest man in the world announced that Meta will replace its independent factchecking with community notes. I went to the AI Action Summit in Paris this week to tell tech execs and policymakers why this is wrong.

Instead of scaling back programmes that make social media and artificial intelligence more trustworthy, companies need to invest in and respect the people who filter social media and who label the data that AI relies on. I know because I used to be one of them.

A mum of two young children, I was recruited from my native South Africa with the promise to join the growing tech sector in Kenya for a Facebook subcontractor, Sama, as a content moderator. For two years, I spent up to 10 hours a day staring at child abuse, human mutilation, racist attacks and the darkest parts of the internet so you did not have to.

It was not just the type of content I had to watch that gave me insomnia, anxiety and migraines, it was the quantity too. In Sama we had something called AHT, or action handling time. This was the amount of time we were given to analyse and rate a piece of content. We were being timed, and the company measured our success in seconds. We were constantly under pressure to get it right.

You could not stop if you saw something traumatic. You could not stop for your mental health. You could not stop to go to the bathroom. You just could not stop. We were told the client, in our case Facebook, required us to keep going."

theguardian.com/commentisfree/

The Guardian · I was a content moderator for Facebook. I saw the real cost of outsourcing digital labour
By Sonia Kgomo

"Bluesky is a nascent “Twitter-like” and decentralized social media network with novel features and unprecedented data access. This paper provides a characterization of its interaction network, studying the political leaning, polarization, network structure, and algorithmic curation mechanisms of five million users. The dataset spans from the website’s first release in February of 2023 to May of 2024. We investigate the replies, likes, reposts, and follows layers of the Bluesky network. We find that all networks are characterized by heavy-tailed distributions, high clustering, and short connection paths, similar to other larger social networks. BlueSky introduced feeds—algorithmic content recommenders created for and by users. We analyze all feeds and find that while a large number of custom feeds have been created, users’ uptake of them appears to be limited. We analyze the hyperlinks shared by BlueSky’s users and find no evidence of polarization in terms of the political leaning of the news sources they share. They share predominantly left-center news sources and little to no links associated with questionable news sources. In contrast to the homogeneous political ideology, we find significant issues-based divergence by studying opinions related to the Israel-Palestine conflict. Two clear homophilic clusters emerge: Pro-Palestinian voices outnumber pro-Israeli users, and the proportion has increased. We conclude by claiming that Bluesky—for all its novel features—is very similar in its network structure to existing and larger social media sites and provides unprecedented research opportunities for social scientists, network scientists, and political scientists alike."

journals.plos.org/plosone/arti

journals.plos.org · Bluesky: Network topology, polarization, and algorithmic curation
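For anyone who wants to poke at the structural claims themselves, the three metrics the abstract cites (heavy-tailed degree distributions, high clustering, short paths) are a few lines of networkx. The graph below is a synthetic stand-in, since the paper's Bluesky dataset obviously isn't reproduced here.

```python
# Compute the paper's three structural metrics on a synthetic graph.
import networkx as nx

# powerlaw_cluster_graph yields a heavy-tailed degree distribution with
# extra triangles, mimicking the high clustering of social networks.
G = nx.powerlaw_cluster_graph(n=5000, m=3, p=0.3, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print(f"top-5 degrees: {degrees[:5]}, median: {degrees[len(degrees) // 2]}")
print(f"average clustering: {nx.average_clustering(G):.3f}")

# Exact average path length is expensive; BFS from one seed node is
# enough to see the "short connection paths" the paper describes.
hops = nx.single_source_shortest_path_length(G, 0)
print(f"mean hops from node 0: {sum(hops.values()) / len(hops):.2f}")
```

On a real scrape you would load one of the interaction layers (replies, likes, reposts, follows) as an edge list, e.g. with nx.read_edgelist, and run the same three calls.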

So yesterday's Instagram incident was revealing: Meta can suddenly flood feeds worldwide with extremely violent content (even to children), then fix it "at the flick of a switch."

Yet for years they've insisted that effectively filtering harmful content, removing misinformation, and protecting users' mental health is too technically complex and costly to implement.

If you ask me, what happened yesterday proved what we already knew (not that they’re trying to hide it anymore): Meta’s moderation challenges aren’t technical limitations—they’re business decisions. When motivated, they can act instantly. Their selective enforcement speaks volumes about their actual priorities.

"Meta CEO Mark Zuckerberg also said in January that the company was removing or dialing back automated systems that reduce the spread of false information. At the same time, Meta is revamping a program that has paid bonuses to creators for content based on views and engagement, potentially pouring accelerant on the kind of false posts it once policed. The new Facebook Content Monetization program is currently invite-only, but Meta plans to make it widely available this year.

The upshot: a likely resurgence of incendiary false stories on Facebook, some of them funded by Meta, according to former professional Facebook hoaxsters and a former Meta data scientist who worked on trust and safety.

ProPublica identified 95 Facebook pages that regularly post made-up headlines designed to draw engagement — and, often, stoke political divisions. The pages, most of which are managed by people overseas, have a total of more than 7.7 million followers.

After a review, Meta said it had removed 81 pages for being managed by fake accounts or misrepresenting themselves as American while posting about politics and social issues. Tracy Clayton, a Meta spokesperson, declined to respond to specific questions, including whether any of the pages were eligible for or enrolled in the company’s viral content payout program.

The pages collected by ProPublica offer a sample of those that could be poised to cash in."

propublica.org/article/faceboo

ProPublica · As Facebook Abandons Fact-Checking, It’s Also Offering Bonuses for Viral Content

Tonight I read this excellent #Lemmy post
https://lemmings.world/post/21510510
regarding John Oliver's show on the topic of "Facebook & Content Moderation".
https://youtu.be/nf7XHR3EVHo
#Meta #Facebook #ContentModeration #JohnOliver

The whole show is extremely informative about Meta, but, this being "Last Week Tonight with John Oliver", there were quite a few f-bombs. Only watch the video if you really want the whole story and are not sensitive to the language. #fbomb #fbombs

John Oliver promoted this website:
"How to change your settings
to make yourself less valuable to Meta"
https://johnoliverwantsyourraterotica.com/
I have already followed the steps.

The show also briefly mentioned #Signal, #Mastodon, #Pixelfed, and #BlueSky as Meta alternatives. #MetaAlternatives
Personally I am on the #Fediverse with #GoToSocial (same protocol as Mastodon - #ActivityPub) and #PixelFed.
As well as Lemmy, #Telegram, and #Matrix.
If you want information on these networks just ask me.

Like some, I am still on Meta’s Facebook and #Instagram because that’s where many of our friends, family, and colleagues are, and I don’t plan on leaving... yet. In the meantime we can lessen our value to Meta and their advertisers.
None of the alternatives I use have #advertisers or #algorithms. It's all #community driven, #funded, and guided by #hashtags and other means.

lemmings.world · John Oliver launches "Make yourself less valuable to Meta" website, suggests Signal, Mastodon, Pixelfed, and BlueSky as Meta alternatives
John Oliver cited a 5,000% rise in search queries related to leaving Meta and deleting accounts. The analysis touched on early Facebook’s naivete about moderation requirements, the constitutional framework, and a history of governmental interference. Oliver debunks common right-wing “cry censorship” talking points, acknowledges the objective difficulty of moderation, and examines how direct threats by Trump may have influenced Zuckerberg’s turnaround. Oliver went on to suggest Signal, Mastodon, Bluesky, and Pixelfed as alternatives that “do not seem as desperate to fall in line with Trump”. For those reluctant to completely ditch Meta, Oliver revealed a new site [https://johnoliverwantsyourraterotica.com/] with step-by-step instructions to “make yourself less valuable to them”. The guide, a collaboration with the EFF, includes privacy-oriented settings tweaks for Facebook and Meta (98% of whose revenue comes from micro-targeted ads, the host previously cited) and recommends Firefox and Privacy Badger as “other measures” to take “to block advertisers and other third parties from tracking you”. The segment [https://invidious.jing.rocks/watch?v=nf7XHR3EVHo] culminated in a mock advert in which Meta’s new approach to moderation is coined as “Fuck it”, hinting at racism, internet scams, and calls to genocide running rampant on Meta’s platforms. The clip recalls Facebook’s origins as a site to “rank college girls by hotness” and its implication in the genocide in Myanmar, discussed more thoroughly in Oliver’s previous special [https://invidious.jing.rocks/watch?v=OjPYmEZxACM] on Facebook in 2018.

"Members of Meta’s oversight board — an independent body tasked with ruling on sensitive moderation issues — were not consulted on the U-turn in any way, four people said.

In the lead-up to the announcement, the board, which includes people such as former Danish prime minister Helle Thorning-Schmidt and ex-Guardian editor Alan Rusbridger, was also only given a cursory notification about the fact-checking announcement and no insight into the hate-speech changes, leaving many members feeling blindsided, the people said.

While the board’s co-chairs put out a statement saying it “welcomed” the news that Meta was reviewing its fact-checking programme, this did not reflect the views of many board members, they said, and in particular did not apply to their thinking on the hate speech policy shift."

ft.com/content/64f7a0d8-1b9c-4