Civil rights groups pushed Facebook, Twitter, YouTube, TikTok to toughen disinfor...

source link: https://www.washingtonpost.com/technology/2022/09/22/midterms-elections-social-media-civil-rights/

Inside the civil rights campaign to get Big Tech to fight the ‘big lie’

For months, civil rights groups have unsuccessfully pleaded with Big Tech companies to bolster their election policies

September 22, 2022 at 7:00 a.m. EDT
Protesters climb scaffolding outside the U.S. Capitol on Jan. 6, 2021, in D.C. (Amanda Andrade-Rhoades for The Washington Post)

A coalition of five dozen civil rights organizations is blasting Silicon Valley’s biggest social media companies for not taking more aggressive measures to counter election misinformation on their platforms in the months leading up to November’s midterm elections.

Through memos and meetings, the Change the Terms coalition for months had pleaded with Facebook parent Meta, Twitter, TikTok and YouTube to bolster the content moderation systems that it says allowed Trump’s baseless claims about election rigging to spread, setting the groundwork for the Jan. 6, 2021, riot at the U.S. Capitol, according to interviews and private correspondence viewed by The Washington Post. Now, with less than two months before the general election, coalition members say they’ve seen little action from the platforms.

“There’s a question of: Are we going to have a democracy? …And yet, I don’t think they are taking that question seriously,” said Jessica González, co-chief executive of the media and technology advocacy group Free Press, which is helping to lead the coalition. “We can’t keep playing the same games over and over again, because the stakes are really high.”

YouTube spokeswoman Ivy Choi said in a statement that the company enforces its “policies continuously and regardless of the language the content is in, and have removed a number of videos related to the midterms for violating our policies.”

A statement from TikTok spokeswoman Jamie Favazza said the social media company has responded to the coalition’s questions and values its “continued engagement with Change the Terms as we share goals of protecting election integrity and combating misinformation.”

Twitter spokeswoman Elizabeth Busby said the company was focused on promoting “reliable election information” and “vigilantly enforcing” its content policies. “We’ll continue to engage stakeholders in our work to protect civic processes,” she said.

Facebook spokesman Andy Stone declined to comment on the coalition’s claims but pointed a Post reporter to an August news release listing the ways the company said it planned to promote accurate information about the midterms.

Among the criticisms laid out in the coalition’s memos:

  • Meta is still allowing posts that support the “big lie” that the 2020 election was stolen to spread on its networks. The groups cited a Facebook post claiming the Jan. 6 Capitol insurrection was a hoax. While TikTok, Twitter and YouTube have banned 2020 election-rigging claims, Facebook has not.
  • Despite Twitter’s ban on disinformation about the 2020 election, its enforcement is spotty. In an August memo, the coalition cited a tweet by Arizona gubernatorial candidate Kari Lake, who asked her followers if they would be willing to monitor the polls for cases of voter fraud. “We believe this is a violation of Twitter’s policy against using its services ‘for the purpose of manipulating or interfering in elections or other civic processes,’ ” the coalition wrote.
  • While YouTube has maintained its commitment to police election misinformation in Spanish, the company declined to release data on how well it was enforcing those rules. That issue became particularly contentious in an August meeting between civil rights groups and Google executives, including YouTube’s chief product officer, Neal Mohan. This month, the coalition expressed concern in a follow-up memo that the company still wasn’t investing enough resources in fighting problematic content in non-English languages.

“The past few election cycles have been rife with disinformation and targeted disinformation campaigns, and we didn’t think they were ready,” González said about the platforms’ election policies. “We continue to see … massive amounts of disinformation getting through the cracks.”

The comments by civil rights activists shed light on the political pressures tech companies face behind the scenes as they make high-stakes decisions about which potentially rule-breaking posts to leave up or take down in a campaign season in which hundreds of congressional seats are up for grabs. Civil rights groups and left-leaning political leaders accuse Silicon Valley platforms of not doing enough to remove content that misleads the public or incites violence during politically charged times.

Meanwhile, right-leaning leaders have argued for years that the companies are removing too much content — criticisms that were amplified after many platforms suspended former president Donald Trump’s accounts following the Jan. 6 attack on the Capitol. Last week, some conservatives cheered a ruling from the U.S. Court of Appeals for the 5th Circuit that upheld a controversial Texas social media law that bars companies from removing posts based on a person’s political ideology. What the limits are for social media companies is likely to be determined by the U.S. Supreme Court, which was asked Wednesday to hear Florida’s appeal of a ruling from the U.S. Court of Appeals for the 11th Circuit that blocked a state social media law.

The Change the Terms coalition, which includes the liberal think tank Center for American Progress, the legal advocacy group Southern Poverty Law Center and the anti-violence group Global Project Against Hate and Extremism, among others, has urged the companies to adopt a wider range of tactics to fight harmful content. Those tactics include hiring more human moderators to review content and releasing more data on the number of rule-breaking posts the platforms catch.

In conversations with the companies this spring, the civil rights coalition argued that the strategies the platforms used in the run-up to the 2020 election won’t be enough to protect the public against misinformation now.

In April, the coalition released a set of recommendations for actions that the companies could take to address hateful, misinformed and violent content on their platforms. Over the summer, the coalition began meeting with executives at all four companies to talk about which specific strategies they could adopt to address problematic content. The groups later sent follow-up memos to the companies raising questions.

“We wanted to kind of almost have like this runway, you know, from April through the spring and summer to move the company,” said Nora Benavidez, a senior counsel and director of digital justice and civil rights at Free Press. The design, she said, was intended to “avoid what is the pitfall that inevitably has happened every election cycle, of their stringing together their efforts late in the game and without the awareness that both hate and disinformation are constants on their platforms.”

The groups quickly identified what they said were the most urgent priorities facing all the companies and determined how quickly they would implement their plans to fight election-related misinformation. The advocates also urged the companies to keep their election integrity efforts in place through at least the first quarter of 2023, because rule-breaking content “doesn’t have an end time,” the groups said in multiple letters to the tech platforms.

Those recommendations followed revelations in documents shared with federal regulators last year by former Meta product manager Frances Haugen, which showed that shortly after the 2020 election, the company had rolled back many of its election integrity measures designed to control toxic speech and misinformation. As a result, Facebook groups became incubators for Trump’s baseless claims of election rigging before his supporters stormed the Capitol two months later, according to an investigation from The Post and ProPublica.

In a July meeting with several Meta policy managers, the coalition pressed the social media giant about when the company enforces its bans against voter suppression and promotes accurate information about voting. Meta acknowledged that the company may “ramp up” its election-related policies during certain times, according to Benavidez and González.

In August, the civil rights coalition sent Meta executives a follow-up letter, arguing that the company should take more aggressive actions against “big lie” content as well as calls to harass election workers.

“Essentially, they’re treating ‘big lie’ and other dangerous content as an urgent crisis that may pop up, and then they will take action, but they are not treating ‘big lie’ and other dangerous disinformation about the election as a longer-term threat for users,” Benavidez said in an interview.

The coalition raised similar questions in a June meeting with Jessica Herrera-Flanigan, Twitter’s vice president of public policy and philanthropy for the Americas, and other company policy managers. At Twitter’s request, the activists agreed not to talk publicly about the details of that meeting. But in a subsequent memo, the coalition urged Twitter to bolster its response to content that already appeared to be breaking the company’s rules, citing the Lake tweet. The Lake campaign did not immediately respond to an email seeking comment.

The coalition also criticized the company for not enforcing its rules against public officials, citing a tweet by former Missouri governor Eric Greitens, a Republican candidate for Senate, that showed him pretending to hunt down members of his own party. Twitter applied a label saying the tweet violated the company’s rules against abusive behavior but left it up, saying it was in the public interest for the tweet to remain accessible. The Greitens campaign didn’t immediately respond to an emailed request for comment.

“Twitter’s policy states that ‘the public interest exception does not mean that any eligible public official can Tweet whatever they want, even if it violates the Twitter Rules,’ ” the groups wrote.

The coalition also pressed all the companies to expand the resources they deploy to address rule-breaking content in languages other than English. Research has shown that the tech companies’ automated systems are less equipped to identify and address misinformation in Spanish. In the case of Meta, the documents shared by Haugen indicated that the company prioritizes hiring moderators and developing automated content moderation systems in the United States and other key markets over taking similar actions in the developing world.

The civil rights groups pressed that issue with Mohan and other Google executives in an August meeting. When González asked how the company’s 2022 midterm policies would be different from YouTube’s 2020 approach, she was told that this year the company would be launching an election information center in Spanish.

YouTube also said the company had recently increased its capacity to measure view rates on problematic content in Spanish, according to González. “I said, ‘Great. When are we going to see that data?’ ” González said. “They would not answer.” A YouTube spokesperson said the company does publish data on video removals by country.

In a follow-up note in September, the coalition wrote to the company that its representatives had left the meeting with “lingering questions” about how the company is moderating “big lie” content and other types of problematic videos in non-English languages.

In June, civil rights activists also met with TikTok policy leaders and engineers who presented a slide deck on their efforts to fight election misinformation, but the meeting was abruptly cut short because the company used a free Zoom account that only allotted around 40 minutes, according to González. She added that while the rapidly growing company is staffing up and expanding its content moderation systems, its enforcement of its rules is mixed.

In an August letter, the coalition cited a post that used footage from the far-right One America News to claim that the 2020 election was rigged. The letter went on to argue that the post, which has since been removed, broke TikTok’s prohibition against disinformation that undermines public trust in elections.

“Will TikTok commit to enforcing its policies equally?” the groups wrote.
