It’s been a long time coming.
Facebook’s constant, nonstop reports of ever-growing user numbers seem to have come to an end. Recently, CNET reported a drop in active users in Europe from 281 million to 279 million. The same article claims that Twitter’s numbers are also dropping, with the platform losing 1 million users over the past three months.
These drops are more than likely the result of a recent wave of scandals over both social media companies’ shady data manipulation practices – practices both have engaged in for years, but which had managed to escape the public eye until now.
One practice, most closely associated with Twitter, is known as the “Shadow Ban”.
According to a report on Project Veritas, a “Shadow Ban” is a way for Twitter to ban a user without the user knowing they have been banned. The user will not be notified of a ban, and can continue to like posts and make posts of their own – but any posts they make will not be visible to other users.
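The mechanism described above can be sketched as a simple visibility filter. This is purely an illustration of the concept, not Twitter’s actual implementation – the `shadow_banned` set, the post structure, and the function name are all hypothetical.

```python
# Illustrative sketch of how a "shadow ban" could work in principle.
# NOT Twitter's real code: all names and data structures are hypothetical.

shadow_banned = {"user_b"}  # hypothetical set of shadow-banned account IDs

posts = [
    {"author": "user_a", "text": "hello"},
    {"author": "user_b", "text": "this post is hidden from others"},
]

def visible_posts(viewer: str) -> list[dict]:
    """Return the posts a given viewer can see.

    A shadow-banned author still sees their own posts, so nothing
    looks wrong from their side, but everyone else gets a filtered feed.
    """
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

# The banned user sees their own post; other users do not.
print(len(visible_posts("user_b")))  # 2: sees both posts, including their own
print(len(visible_posts("user_a")))  # 1: the shadow-banned post is filtered out
```

The key point the sketch captures is the asymmetry: the ban changes what everyone *else* sees, which is why the banned user is never aware of it.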
But what does a user need to do to get “Shadow Banned”? It’s pretty simple, according to Twitter Direct Messaging Engineer Parnay Singh, who revealed that shadow bans are handed out automatically by an AI system – a system designed to target Republican views.
“You have like five thousand words to describe a redneck…The majority of it are for Republicans.”
– Parnay Singh
Twitter Engineer Stephen Pierre was also quoted as saying the intention of the bans is to “ban a way of talking”.
In theory, it’s more likely that these bans are a precautionary measure, meant to catch people who, according to algorithmic behavior patterns, are very likely to post “Hate Speech” in the future. The shadow bans may be Twitter’s own way to encourage safe, polite discussion on its site.
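The kind of precautionary flagging described above can be sketched as a score-and-threshold rule. Again, this is a toy illustration under stated assumptions – the scoring function, the watchlist, and the 0.5 cutoff are all hypothetical stand-ins for whatever model Twitter actually uses.

```python
# Illustrative sketch of precautionary flagging based on predicted risk.
# The scoring function, watchlist, and threshold are hypothetical,
# not a description of any real moderation system.

def predicted_risk(history: list[str]) -> float:
    """Toy stand-in for a trained model: the fraction of a user's past
    posts containing any word from a (hypothetical) watchlist."""
    watchlist = {"badword1", "badword2"}
    if not history:
        return 0.0
    flagged = sum(
        any(word in post.lower() for word in watchlist) for post in history
    )
    return flagged / len(history)

THRESHOLD = 0.5  # hypothetical cutoff for "very likely to offend"

def should_shadow_ban(history: list[str]) -> bool:
    # Flag pre-emptively when the score predicts future violations.
    return predicted_risk(history) > THRESHOLD

print(should_shadow_ban(["hello world", "nice day"]))          # False
print(should_shadow_ban(["badword1 everywhere", "badword1!"]))  # True
```

The sketch also shows why such a system is contentious: the ban is triggered by a statistical prediction about future posts, not by anything the user has necessarily done yet.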
In addition, no one likes reading things they disagree with – one’s first reaction to an argument that clashes with one’s core values is known as the backfire effect, and those impacted by it instinctively refuse to reconsider their opinions, rejecting the argument outright. This goes a long way toward explaining why online “discussions” rarely reach peaceful conclusions. And an online space where everyone is always fighting isn’t a space many would like to visit. It’s possible Twitter enabled “shadow banning” to decrease political arguments on its site and make its public image a bit friendlier.
Still, while the company may have had its reasons, banning those who disagree with certain opinions doesn’t sound like the best way to solve the problem.
While Facebook’s systems have not been so easily uncovered or named, systems of this caliber exist there as well. For Facebook, the focus is less about banning viewpoints and more about giving certain – obviously incorrect – viewpoints far more credibility than they deserve. And we’re not talking about the politically incorrect – Infowars is a conspiracy theory site that posts stories which are often absurd and factually wrong, but some have been close enough to reality to be plausible, and could be taken as fact if a person were shown just one article from Infowars’ main page.
For example, a news report stating a man changed his gender on his driver’s license to qualify for cheaper insurance sounds vaguely plausible, but not so much when it’s next to articles claiming the moon landing and the September 11th attacks were faked.
If one visits the main site, they’ll quickly understand that it’s all quite literally fake news, but the problem arises when users can’t see the whole site. When Facebook gives Infowars similar priority in users’ feeds to actual news sites like Fox or NBC, users could easily mistake these stories for the real thing. Over time, many have called for Infowars to be banned from Facebook, or at least tagged with some kind of flag denouncing its information as fraudulent, but Facebook has yet to act on the outcry.
This is a touchy subject, and one that’s been causing a lot of problems for Facebook. On one hand, a man spreading “Fake News” stories is exercising his right to freedom of speech, and censoring him isn’t exactly in line with his constitutional rights. But on the other hand, this “Fake News” can be very dangerous if it isn’t easy to recognize as such.
These issues have been active for some time, but have not, until recently, been impacting the stocks or user counts of these sites. It appears that day has finally come, as this week has seen both the user counts and the stocks of two of the world’s most popular social media sites go into freefall.
The sites will need to act fast to solve their biggest issues and retain what investors they can, as user counts continue to drop.