Would Self-Classification of Social Media Posts Address the Key Problems in Moderating Online Speech?

Content moderation is a hot topic in social media circles right now, as Elon Musk works to reform Twitter while publishing details of past moderation decisions as evidence, in his view, of how the platform has wielded too much power in restricting certain debates.

Musk has pointed to some alleged procedural problems, but the real question is how to solve them. If the small teams of executives who run these platforms can’t be trusted to make content decisions, what are the alternatives?

Meta’s experiment with a panel of outside experts has, in general, been successful, but its Oversight Board can’t rule on every content decision, and Meta still faces harsh criticism over perceived censorship and bias despite this alternative avenue of appeal.

Unless another route can be devised, some element of decision-making will inevitably fall to platform management.

Alternative feeds tailored to individual preferences could also help with this.

Several platforms are exploring this. According to The Washington Post, TikTok is experimenting with a concept called “Content Levels” to keep “adult” content from showing up in the feeds of younger users.

TikTok has faced increasing criticism on this front, particularly over dangerous challenge trends that have resulted in the deaths of several young people who took part in them.

Elon Musk has praised similar content management approaches as part of his broader “Twitter 2.0” vision.

Under Musk’s version, users would self-classify their tweets as they post them, and readers would also be able to assign a maturity rating of sorts, helping to shift potentially harmful content into a separate category.

Users would then be able to choose from several levels of experience in the app, from “safe,” which would filter out the more extreme remarks and debates, to “unfiltered” (Musk has suggested “hardcore”), which would give you the full experience.

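To make that flow concrete, here is a minimal sketch of how self-classified posts and reader-selected experience levels could fit together. It is purely illustrative: the level names, the Post structure, and the filter_feed function are assumptions made for this example, not anything Twitter or TikTok has published.

```python
# Illustrative sketch only: a self-declared content level on each post,
# plus a reader-side filter. Names and levels are hypothetical.
from dataclasses import dataclass
from enum import IntEnum


class ContentLevel(IntEnum):
    SAFE = 0        # author says this is fine for everyone
    MODERATE = 1    # stronger language, contentious debate
    UNFILTERED = 2  # the full, "hardcore" experience


@dataclass
class Post:
    author: str
    text: str
    level: ContentLevel  # chosen by the author at composition time


def filter_feed(posts: list[Post], max_level: ContentLevel) -> list[Post]:
    """Show only posts at or below the reader's chosen experience level."""
    return [p for p in posts if p.level <= max_level]


feed = [
    Post("a", "Lovely sunset tonight", ContentLevel.SAFE),
    Post("b", "Heated political argument", ContentLevel.MODERATE),
    Post("c", "Not for the faint-hearted", ContentLevel.UNFILTERED),
]

# A reader who opts for "safe" sees only the first post;
# an "unfiltered" reader sees all three.
print([p.text for p in filter_feed(feed, ContentLevel.SAFE)])
```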
In theory this sounds appealing, but would users actually self-classify their tweets, and would they apply the ratings consistently enough to make this a workable approach to filtering?

Of course, the platform could impose penalties for incorrect or missing classifications. Repeat offenders might have the most severe segmentation applied to all of their tweets automatically, while content from compliant users could be shown in both, or all, streams to reach the broadest possible audience.

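As a rough illustration of that enforcement idea, the sketch below revokes the benefit of the doubt from repeat offenders; the threshold, level names, and helper functions are invented for this example and don’t reflect any documented platform policy.

```python
# Hypothetical enforcement sketch: authors whose posts are repeatedly confirmed
# as under-rated lose the right to self-classify, and all of their new posts
# are forced into the most restricted segment. The threshold is an assumption.
from collections import defaultdict

LEVELS = ["safe", "moderate", "unfiltered"]  # distribution narrows as the level rises
REPEAT_OFFENDER_THRESHOLD = 3                # assumed cutoff, not a real policy

confirmed_misratings: dict[str, int] = defaultdict(int)


def record_misrating(author: str) -> None:
    """Call when readers or moderators confirm a post was rated too low."""
    confirmed_misratings[author] += 1


def effective_level(author: str, declared_level: str) -> str:
    """Repeat offenders have the most severe segmentation applied automatically."""
    if confirmed_misratings[author] >= REPEAT_OFFENDER_THRESHOLD:
        return LEVELS[-1]  # post surfaces only in the "unfiltered" stream
    return declared_level


# Example: after three confirmed mis-ratings, the author's declared "safe"
# rating is overridden; a compliant user's rating is left alone.
for _ in range(3):
    record_misrating("repeat_offender")
print(effective_level("repeat_offender", "safe"))  # -> "unfiltered"
print(effective_level("compliant_user", "safe"))   # -> "safe"
```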
Requiring users to manually select a classification as part of the composition process could also ease some of these concerns.

But even if it did, self-classification alone wouldn’t stop social media from amplifying hate speech or lending support to dangerous movements.

In most cases where Twitter or other social platforms have acted to censor users, it has been because of the risk of harm, not necessarily because the remarks themselves were offensive.

When former president Donald Trump posted that “when the looting starts, the shooting starts,” for instance, the concern was that his followers might read it as, essentially, a license to kill, with the President tacitly endorsing the use of lethal force to deter looters.

Self-censorship, or choosing a maturity rating for your posts, won’t address that key problem. Social networks rightly don’t want their tools used to spread potential harm in this way, and self-classification would only hide such remarks from users who choose not to see them.

In other words, it obscures the problem rather than improving safety. The real issue is not that people say, or want to say, offensive things online, nor that others take offense at them.

That’s not the problem. Hiding potentially offensive content may help limit exposure, especially for younger audiences on TikTok, but it won’t stop people from using these apps’ enormous reach to spread hate and dangerous calls to action that cause harm in the real world.

It’s essentially a piecemeal measure, a dilution of responsibility, which may have some effect in certain cases. But it doesn’t address the platforms’ core obligation to prevent the misuse of the tools and systems they have built.

Because those tools are already being misused, and always will be. Social media has been used to fuel riots, military coups, political revolutions, civil unrest, and more.

Just this past week, Meta faced further legal action over claims that it allowed violent and hateful posts about the conflict in Ethiopia to proliferate on Facebook, inflaming the country’s horrific civil war. Victims of the ensuing violence are seeking $2 billion in damages.

Social media platforms can be used to fuel real, dangerous movements, so this is not just about political viewpoints you happen to disagree with.

There will always be some onus on the platforms themselves to define the rules and ensure these worst-case scenarios are dealt with, and self-certification alone is unlikely to help in such situations.

Either that, or higher-level regulation needs to be established by governments and by bodies created to assess the impact of such content and take appropriate action.

However, contrary to what many “free speech” advocates would have you believe, the fundamental question is not whether social media sites should let users express themselves freely and post whatever they want. Given how far social posts can be amplified, there will always be boundaries and guardrails, and at times they may well extend beyond the bounds of the law.

There are no simple solutions, but relying on public opinion alone is unlikely to deliver an overall improvement.
