
Meta pressed to compensate war victims amid claims Facebook inflamed Tigray conflict


There are increasing calls for Meta to establish a compensation fund for victims of the Tigray war, which Facebook is accused of inciting and which left more than 600,000 people dead and displaced millions more across Ethiopia.

In a new report, rights organization Amnesty International has urged Meta to establish a fund that would also assist other victims of conflict worldwide, citing growing concerns that the platform’s presence in “high-risk and conflict-affected areas” could “fuel advocacy of hatred and incite violence against ethnic and religious minorities” in new areas. The report concludes that “Meta contributed to human rights abuses in Ethiopia.”

The renewed demand for compensation comes as a lawsuit in Kenya, in which Ethiopians are suing Meta for allegedly fueling the Tigray conflict and seeking a $1.6 billion payout, is due to begin next week. Amnesty International is an interested party in the case.

Amnesty International has also asked Meta to publicly acknowledge and apologize for any part it played in human rights violations during the conflict, and to expand its content moderation capabilities in Ethiopia to cover 84 languages, up from the four it presently supports. The Tigray war broke out in November 2020, when fighting escalated in the country’s northern region between the Tigray People’s Liberation Front (TPLF) and Ethiopia’s federal government, which was backed by Eritrea.

The rights group claims that Facebook “became awash with content inciting violence and advocating hatred” against the Tigrayan community, including posts that dehumanized and discriminated against Tigrayans. It attributes the normalization of “hate, violence, and discrimination against the Tigrayan community” to Meta’s “surveillance-based business model and engagement-centric algorithms,” which it says put profit first and pursue “engagement at all costs.”

Because provocative, dangerous, and contentious content tends to attract the most attention from users, Meta’s content-shaping algorithms, which are tuned to maximize engagement, end up amplifying it, according to the report.

According to the report, which detailed the firsthand accounts of Tigray war victims, “in the context of the northern Ethiopian conflict, these algorithms fueled devastating human rights impacts, amplifying content targeting the Tigrayan community across Facebook, Ethiopia’s most popular social media platform, including content that advocated hatred and incited violence, hostility, and discrimination.”

According to Amnesty International, algorithmic virality carries serious risks in conflict-prone areas, since what happens online can readily translate into violence offline. The group criticized Meta for putting engagement ahead of the well-being of Tigrayans, for inadequate moderation that allowed misinformation to proliferate on its platform, and for ignoring earlier warnings about Facebook’s potential for abuse.

The report describes how scholars, the Facebook Oversight Board, civil society organizations, and Meta’s “Trusted Partners” warned that Facebook could fuel widespread violence in Ethiopia, yet Meta disregarded their advice both before and during the conflict.

Digital rights organizations wrote to Meta in June 2020, four months before war broke out in the country’s north, about harmful content circulating on Facebook in Ethiopia. The letter warned that such content could “lead to physical violence and other acts of hostility and discrimination against minority groups.”

The letter made several suggestions, such as “stopping algorithmic amplification of content inciting violence, temporary changes to sharing functions, and a human rights impact assessment into the company’s operations in Ethiopia.”

Amnesty International says it observed similar systemic failings in Myanmar three years before the conflict in Ethiopia, including the deployment of an automated content removal system that could not read the local script and therefore allowed dangerous content to remain online.

As in Myanmar, the report claims, moderation in the East African nation was mishandled despite Ethiopia’s inclusion in Meta’s “tier system,” which was intended to guide the allocation of moderation resources.

“Meta was slow to react to feedback from content moderators regarding terms that should be considered harmful, and it was unable to adequately moderate content in the main languages spoken in Ethiopia.” As a result, harmful content was at times allowed to remain on the platform even after being flagged, because it was judged not to contravene Meta’s community guidelines.

It said that “content moderation is an important mitigation tactic, even though it alone would not have prevented all the harms stemming from Meta’s algorithmic amplification.”

In related news, the UN Human Rights Council recently released a report on Ethiopia. It found that even though Facebook designated Ethiopia an “at-risk” country, the company invested too little in staffing and language resources and was too slow to remove harmful content. Separately, Global Witness research found Facebook to be “extremely poor at detecting hate speech in the main language of Ethiopia,” and whistleblower Frances Haugen previously accused the company of “literally fanning ethnic violence” in Ethiopia.

Responding to criticism that it had not taken adequate action to ensure Facebook was not used to incite violence, Meta said: “We fundamentally disagree with the conclusions Amnesty International has reached in the report, and the allegations of wrongdoing ignore important context and facts.” The company added that Ethiopia has been, and remains, one of its top priorities and that it has taken significant steps to stop harmful content from appearing on Facebook.

A Meta representative said the company’s safety and integrity work in Ethiopia is guided by feedback from local civil society organizations and international institutions, many of whom it met with in Addis Ababa this year and continues to engage. “We hire people with local knowledge and experience, and we keep improving our capacity to identify illegal information in the most commonly spoken languages in the nation, such as Amharic, Oromo, Somali, and Tigrinya,” the representative said.

The actions Meta did take, such as improving its language classification and content moderation systems and cutting back on reshares, came too late and were “limited in scope,” according to Amnesty International, as they did not “address the root cause of the threat Meta represents to human rights”: the company’s data-hungry business model.

Amnesty suggests that Meta’s “Trusted Partner” program be reorganized to ensure that human rights groups and civil society organizations have a meaningful say in content decisions, and that Meta’s platforms in Ethiopia be subject to human rights impact assessments. It further recommends that Meta “give users an opt-in option for using its content-shaping algorithms” and urges the company to stop the invasive collection of personal data that violates human rights.

Amnesty is not, however, blind to Big Tech’s general reluctance to put people first, and it has urged governments to enact and enforce laws and regulations that prevent and punish corporate abuses.

“States must now more than ever uphold their duty to safeguard human rights by drafting and implementing significant laws that will restrain the surveillance-based economic model,” the report said.
