According to a report from Reuters, Facebook is looking to make a number of changes to its algorithms and how they process user data. It looks like the company didn't take the whole Infowars fiasco lightly.
Last week, the news / conspiracy-theory platform Infowars was banned from Facebook after it violated one of the company's policies concerning hate speech and bullying. However, the ban came only after a significant number of users became enraged with the platform's posts and demanded its removal, raising the question: would the platform have been taken down without this community outcry?
And with these changes, it appears Facebook has been asking itself similar questions, long before anyone else was.
Take it from the woman responsible for managing the company's responses to so-called 'malicious actors': Facebook product manager Tessa Lyons. In an interview, Lyons said the new rating system is not something hastily thrown together after the Infowars event, but a piece of technology Facebook has been developing over the past year.
Lyons says development of such a system is necessary because users can no longer be trusted to accurately inform the platform whether or not a news story is actually fake: “It’s not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.”
Lyons also suggests that the new system may not only allow Facebook to discredit these "angry flaggers" but also penalize them with a poor trustworthiness score. And although the source doesn't spell out the effects of a low rating, it may decrease the weight of any flag such a user attaches to content. Moral of the story: don't flag something unless it really deserves to be flagged, or Facebook may stop paying attention to you. The boy who cried wolf, basically.
The interview does not, however, mention how exactly the score will be calculated, but the way the scale is graded (from 0 to 1, with 1 being very trustworthy) suggests the calculation will rely mostly on automated algorithms rather than human review.
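To make the idea concrete, here is a minimal sketch in Python of how a 0-to-1 trustworthiness score might be computed and then used to weight flags. Everything here is an assumption for illustration: the interview describes neither the formula nor any function names, so the smoothing, the neutral prior, and the weighting scheme below are invented.

```python
# Hypothetical sketch only: Facebook has not disclosed how the score is
# calculated. This illustrates one plausible automated approach.

def trust_score(confirmed: int, total: int,
                prior: float = 0.5, prior_weight: int = 5) -> float:
    """Smoothed fraction of a user's past flags that reviewers confirmed.

    Returns a value between 0 and 1. Users with no flagging history
    start near the neutral prior of 0.5 rather than at an extreme.
    """
    return (confirmed + prior * prior_weight) / (total + prior_weight)

def weighted_report_signal(flagger_scores: list[float]) -> float:
    """Aggregate flag signal for a post: each flag counts for its
    flagger's trust score, so a few flags from trusted users can
    outweigh many flags from habitual mis-flaggers."""
    return sum(flagger_scores)
```

Under a scheme like this, the "boy who cried wolf" effect falls out naturally: a user whose flags are rarely confirmed drifts toward 0, and their future flags contribute almost nothing to a post's signal.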
Lyons also notes that this new rating will not be the only factor the platform uses to assess a user's risk: it's just one of thousands of systems, scores, and ratings that Facebook engineers have in development to help the algorithm track and understand the "risk" a user poses. Scoring poorly on a number of these "risk" tests could push a user's questionable posts higher up the lists Facebook engineers comb through when determining which submissions violate the terms of service and which do not.
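The process described above, where many signals combine to decide what human reviewers see first, can be sketched as a simple priority queue. The signal names and weights below are invented; the article says only that thousands of such scores exist, not what they are or how they are combined.

```python
# Hypothetical sketch: an assumed weighted-sum ranking for a human
# review queue. Real signal names and weights are not public.

def review_priority(signals: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Combine several 0-to-1 risk signals into one priority score.
    Higher scores mean the post surfaces sooner for human review."""
    return sum(weights.get(name, 0.0) * value
               for name, value in signals.items())

# Invented example weights and posts for illustration.
WEIGHTS = {"weighted_flags": 0.5, "flagger_trust": 0.3, "prior_violations": 0.2}

posts = [
    {"id": "a", "signals": {"weighted_flags": 0.9,
                            "flagger_trust": 0.8,
                            "prior_violations": 0.1}},
    {"id": "b", "signals": {"weighted_flags": 0.2,
                            "flagger_trust": 0.3,
                            "prior_violations": 0.0}},
]

# Sort so the riskiest posts land at the front of the review queue.
queue = sorted(posts,
               key=lambda p: review_priority(p["signals"], WEIGHTS),
               reverse=True)
```

The point of the design, as Lyons describes it, is that no single score decides anything on its own; each one just nudges a post's position in the pile that human moderators work through.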
This system is not live yet, and Lyons gave no indication or estimate of when it will be. But given the under-the-hood nature of many of these social-networking systems, you probably won't notice when it arrives. And while its impact on the social network may not be huge, its addition is significant for what it represents: a continued effort by Facebook to seek out, detect, and wipe its platform clean of fake news.