
Canadian tech company Telus International is set to lay off more than 2,000 workers in Barcelona after Meta Platforms Inc. abruptly ended a major content moderation contract, according to a statement from Spanish union CCOO on Monday.
Telus, which has handled content moderation for Facebook and Instagram under the name Barcelona Digital Services, informed employees during a morning meeting that it would terminate all positions tied to Meta’s content moderation work, affecting 2,059 workers in total.
The mass redundancy follows Meta’s decision to cut back on third-party content moderation and fact-checking operations, particularly in the United States, as part of a broader restructuring of its approach to harmful and misleading content.
The union confirmed it had reached a preliminary agreement with Telus to ensure the “highest possible legal compensation” for affected employees.
Although Telus declined to confirm the exact number of job cuts, a company spokesperson said its focus remains on supporting impacted team members, including through relocation opportunities.
Telus has worked with Meta since 2018, providing outsourced teams to screen user-generated content for potential violations of platform rules around hate speech, misinformation, and other sensitive topics.
However, Meta has significantly scaled back these efforts, opting for a user-driven reporting model rather than proactively scanning content. The shift aligns with recent remarks by CEO Mark Zuckerberg, who argued that U.S.-based fact-checking had become “too politically biased,” eroding public trust.
In January, Zuckerberg announced plans to replace American fact-checkers with a community-driven system similar to “Community Notes” on X (formerly Twitter), the platform owned by Elon Musk, a billionaire and political ally of Donald Trump.
“Fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.,” Zuckerberg said.
Meta’s new policy also means it will no longer proactively scan for hate speech or misinformation, responding only when users flag such content themselves. This mirrors long-standing criticism from conservatives and some tech leaders who claim that content moderation has been weaponized to stifle free speech, a view rejected by professional fact-checkers and many watchdog groups.
Despite the global shift in its approach, Meta continues to maintain partnerships with fact-checking organizations in other regions, including one with AFP, which handles verification across Asia-Pacific, Europe, Latin America, the Middle East, and Africa.