
It’s Getting Harder for the Government to Secretly Flag Your Social Posts

Jul 24, 2023 7:00 AM

Social apps prioritize content moderation tips from governments and online watchdogs. A US court ruling and a new EU law could restrict the practice, but they still leave loopholes.

Photograph: MirageC/Getty Images

While Israeli police smothered Palestinian protests on the streets of East Jerusalem in May 2021, a separate agency attempted its own sweep online. The Cybercrime Department in Israel’s Ministry of Justice sent social media companies lists of thousands of user accounts it wanted removed for violating the services’ content policies with their posts about the protests.

A former Twitter employee says the company suspended a few of the accounts flagged by the Israeli agency for using hateful or harassing language. But policy staffers determined that most were simply Palestinians and others tweeting comments that, while critical of Israel, did not break any rules.

The Israeli cyber department is an example of what scholars of online platforms call an internet referral unit—a government team created to badger online services into taking action against content it doesn’t like. A raft of IRUs has been launched by countries across the world as governments of all kinds grapple with online platforms. Tech companies often prioritize IRU requests in moderation queues, to the concern of critics who say the units can reflect political motivations and often skirt legal hurdles designed to prevent unfair censorship.

After a decade of growing mostly unrestricted, internet referral units are now facing new checks and balances in the US and the EU.

A federal judge in Louisiana this month issued a preliminary injunction banning 41 Biden administration officials and their staff across 10 different US agencies from tipping off social media companies about content thought to violate a service’s terms of use. The ban severely curtailed the White House’s influence on digital town squares through the agencies’ informal IRUs, and it caused the State Department to postpone a planned meeting with Meta to share information on countering disinformation abroad.

In the EU, referral units are set to be subjected to new transparency requirements by one provision of the wide-ranging Digital Services Act that takes effect next year.

The ruling and the law are the first significant disruptions to the coziness that has existed between online platforms and the government agencies and other organizations patrolling the web to quietly suppress unfavorable commentary. But though rights groups that promote freedom of expression have applauded the new interventions, they also warn that IRUs and the moderation decisions they prompt will largely be allowed to continue without adequate controls or disclosures.

Internet referral units first emerged around 2010 in the UK, as services such as Facebook and YouTube faced pressure from counterterrorism officials to better handle content generated by violent Islamic extremists. Companies trying to establish better relations with governments generally accepted the requests and even anointed IRUs as “trusted flaggers,” whose reports of bad content would get reviewed more swiftly than those of standard users.

The numbers and activity of IRUs expanded rapidly. Companies also added civil society organizations as trusted flaggers. Authorities in countries including Germany and France used the tactic to suppress far-right political extremism on social media in the later 2010s, and then health disinformation during the pandemic.

Referral units are not always formal or well-organized entities, and their remits vary, but a common process has become established: Choose a topic to monitor, such as political misinformation or anti-Semitism, trawl for problematic content, and then flag it to companies through dedicated hotlines, physical letters, personal relationships with insiders, or the “report this” buttons available to all users. The units may report solely what appears to be criminal activity, but some flag content that is legal but banned under a platform’s rules, like nudity or bot accounts.

More often than not, experts say, compliance by platforms is voluntary because the requests are not legally binding; users are generally not informed of who reported their content. Rights groups have long expressed concern that IRUs effectively circumvent legal processes, trading speed and simplicity for transparency and checks on the abuse of power—while also pushing reports from users to the back of the line.

Social media companies can feel significant pressure to act on IRU requests because fighting them could lead to regulations that raise the costs of doing business, according to several experts and four former tech company policy staffers who have handled demands. It’s common for politicians and influential groups to request direct channels to escalate concerns about content, and for platforms to provide them.

Power balances established offline get reflected in the programs. The Palestinian Authority, one of the small governing groups at odds with Israel, “does not have the leverage or relationship with Meta to operate an effective IRU,” says Eric Sype of the Palestinian rights group 7amleh. Meta, TikTok, and Twitter did not respond to requests for comment for this story, and YouTube declined to comment.

IRUs have been challenged before. In 2021, the year Israel clashed with protestors in East Jerusalem and pinged companies including Twitter, the country’s Supreme Court ruled against a challenge to the Justice Ministry’s unit. The court called the unit’s work “crucial to the national security and social order” and allowed it to continue because the plaintiffs couldn’t demonstrate direct harm. That year, the IRU ultimately sent nearly 6,000 requests to tech companies, including over 1,300 to Twitter, for voluntary removal or restriction of content such as “praise of terrorism” and Covid vaccine misinformation, according to the Israeli government’s annual disclosures and a US State Department analysis. Almost 5,000 requests were granted, the data show. Israel’s embassy in Washington, DC, did not respond to a request for comment.

Meta’s independent Oversight Board, an appeals body for thorny content moderation issues, has also pushed back. The company last year had removed a UK drill-music track at the request of London police over concern that the song’s reference to a shooting could incite violence. The board overturned the takedown, saying Meta lacked sufficient evidence of a credible threat, and chastised the company for accepting informal law enforcement requests in “a haphazard and opaque” manner. It called for Meta to publicize all such requests, a plea that Oversight Board cochair Michael McConnell and member Suzanne Nossel repeated in separate op-eds this month. Meta says on its website that it is working on doing so but isn’t sure how long it will take, because centralizing all the requests is complicated.

The US federal court ruling this month that banned agency officials from making takedown pleas dealt the biggest blow yet to IRUs. It came after two conservative-led states and several social media users filed a lawsuit alleging that the White House was violating the First Amendment’s protection against government censorship by pressuring Facebook and Twitter to place advisory labels on posts and suspend or ban accounts. The disputed content questioned Covid face-masking, vaccines, virus origins, and lockdowns.

Judge Terry Doughty ruled that the plaintiffs were likely to succeed in proving that a bombardment of takedown requests by emails and calls from White House and federal agency officials forced the social media companies’ hands, amounting to a practice known as jawboning. He accused the administration of targeting “disfavored conservative speech” and pointed to officials’ informal but sometimes intense emailed demands. One said: “Cannot stress the degree to which this needs to be resolved immediately. Please remove this account immediately.”

Wrote Doughty, “Defendants ‘significantly encouraged’ the social-media companies to such extent that the decisions (of the companies) should be deemed to be the decisions of the government.”

Doughty’s ban, which is now on hold as the White House appeals, attempts to set the bounds of acceptable conduct for government IRUs. It provides an exemption for officials to continue notifying social media companies about illegal activity or national security issues. Emma Llansó, director of the Free Expression Project at the Center for Democracy & Technology in Washington, DC, says that leaves much unsettled, because the line between thoughtful protection of public safety and unfair suppression of critics can be thin.

The EU’s new approach to IRUs also seems compromised to some activists. The Digital Services Act (DSA) requires each EU member to designate a national regulator by February that will take applications from government agencies, nonprofits, industry associations, or companies that want to become trusted flaggers that can report illegal content directly to Meta and other medium-to-large platforms. Reports from trusted flaggers have to be reviewed “without undue delay,” on pain of fines of up to 6 percent of a company’s global annual sales.

The law is intended to make IRU requests more accurate, by appointing a limited number of trusted flagging organizations with expertise in varying areas of illegal content such as racist hate speech, counterfeit goods, or copyright violations. And organizations will have to annually disclose how many reports they filed, to whom, and the results.

But the disclosures will have significant gaps, because they will include only requests related to content that is illegal in an EU state—allowing reports of content flagged solely for violating terms of service to go unseen. Though tech companies are not required to give priority to reports of content flagged for rule breaking, there’s nothing stopping them from doing so. And platforms can still work with unregistered trusted flaggers, essentially preserving the obscure practices of today. The DSA does require companies to publish all their content moderation decisions to an EU database without “undue delay,” but the identity of the flagger can be omitted.

“The DSA creates a new, parallel structure for trusted flaggers without directly addressing the ongoing concerns with actually existing flaggers like IRUs,” says Paddy Leerssen, a postdoctoral researcher at the University of Amsterdam who is involved in a project providing ongoing analysis of the DSA.

Two EU officials working on DSA enforcement, speaking on condition of anonymity because they were not authorized to speak to media, say the new law is intended to ensure that all 450 million EU residents benefit from the ability of trusted flaggers to send fast-track notices to companies that might not cooperate with them otherwise. Although the new trusted-flagger designation was not designed for government agencies and law enforcement authorities, nothing blocks them from applying, and the DSA specifically mentions internet referral units as possible candidates.

Rights groups are concerned that if governments participate in the trusted flagger program, it could be used to stifle legitimate speech under some of the bloc’s more draconian laws, such as Hungary’s ban (currently under court challenge) on promoting same-sex relationships in educational materials. Eliška Pírková, global freedom of expression lead at Access Now, says it will be difficult for tech companies to stand up to the pressure, even though states’ coordinators can suspend trusted flaggers deemed to be acting improperly. “It’s the total lack of independent safeguards,” she says. “It’s quite worrisome.”

Twitter barred at least one human rights organization from submitting to its highest-priority reporting queue a couple of years ago because the group filed too many erroneous reports, the former Twitter employee says. But dropping a government could certainly be more difficult. Hungary’s embassy in Washington, DC, did not respond to a request for comment.

Tamás Berecz, general manager of INACH, a global coalition of nongovernmental groups fighting hate online, says some of its 24 EU members are contemplating applying for official trusted flagger status. But they have concerns, including whether coordinators in some countries will approve applications from organizations whose values don’t align with the government’s, like a group monitoring anti-gay hate speech in a country like Hungary, where same-sex marriage is forbidden. “We don’t really know what’s going to happen,” says Berecz, leaving room for some optimism. “For now, they are happy being in an unofficial trusted program.”

Paresh Dave is a senior writer for WIRED, covering the inner workings of big tech companies. He writes about how apps and gadgets are built and about their impacts, while giving voice to the stories of the underappreciated and disadvantaged. He was previously a reporter for Reuters and the Los Angeles Times.
