The Revolving Door: How Discord’s Reliance on Contract Moderators Is Breaking the Promise of a Safe Platform

By Joshita

There is a moment that every longtime Discord user knows. You report something genuinely horrible, something graphic or threatening or predatory, and you wait. Maybe days pass. Maybe you get an automated reply that tells you nothing. Maybe the person you reported is still there, posting freely, while somewhere across the world, a contractor you will never know about briefly glanced at a ticket and moved on to the next one in a queue of thousands. That moment of waiting is not an accident. It is, in a very real sense, the product.

Discord is one of the most consequential platforms in the world and one of the least understood by people outside its own ecosystem. With over 200 million monthly active users, it has grown far beyond its origins as a gaming chat tool. It is where teenagers form their social lives, where extremists recruit, where communities organize, and where harm can escalate with terrifying speed. And at the center of the platform’s effort to police all of this sits a trust and safety apparatus that is smaller, more fragile, and more reliant on outside contractors than most users have ever been told.

This piece looks at what happens when you build platform enforcement around a workforce that, by design, turns over. It examines the contractors, the burnout, the institutional knowledge that vanishes every time a new cohort cycles in, and the users who bear the cost of that inconsistency. It is, ultimately, a story about what safety looks like when it is treated as an expense rather than a commitment.

The Numbers Discord Would Rather You Not Think About

In early 2024, Discord disclosed something remarkable in its written responses to the United States Senate Judiciary Committee. The company revealed that its dedicated trust and safety team had grown from just 22 full-time employees in 2019 to 90 by 2023. Then came the layoffs. After cutting 17 percent of its overall workforce in January 2024, the team shrank back to 74 full-time employees. That disclosure came from NBC News1, one of the few places where the actual staffing numbers received sustained attention.

What makes that figure particularly striking is what sits alongside it. Discord told senators it had supplemented its in-house team with what it described as “400 additional contract agents, including external, virtual Special Operations Center, and other support functions.” In other words, for every full-time safety employee, there are roughly five contractors doing much of the heavy lifting. The ratio matters. Full-time employees build institutional knowledge. They understand the edge cases, the evolving tactics of bad actors, and the cultural context of communities they moderate. Contractors, cycling through in waves, often do not.

The layoffs affected the trust and safety staff directly. According to reporting from Cryptopolitan2, the January 2024 cuts hit “engineers, product managers, data scientists, and members of the trust and safety team.” Discord CEO Jason Citron framed it all as a correction for over-hiring during the pandemic years. The company had grown its workforce by five times since 2020. The consequences of shrinking it again would be felt unevenly, and the trust and safety function, never the most visible or celebrated part of a tech company, was among those that paid.

The Contractors Nobody Talks About

The outsourcing of content moderation is a story that has been told most vividly in relation to Facebook and TikTok, where investigations by journalists at Time magazine and The Bureau of Investigative Journalism documented conditions that bordered on labor exploitation. Workers in Colombia contracted through Teleperformance to moderate TikTok content were found to be earning as little as the equivalent of ten dollars a day while processing graphic violence, sexual abuse material, and political executions under punishing productivity quotas. Time Magazine’s3 investigation into Teleperformance and TikTok triggered a formal government investigation in Colombia.

The same structural architecture shapes Discord’s contractor workforce, even if the pay rates and working conditions differ by geography and vendor. Discord has been linked to Teleperformance as well as Keywords Studios, a Dublin-based agency that has worked on Discord’s content moderation. The blog PNLY, which covers Discord’s platform mechanics in unusual depth, noted bluntly that “the flaws that come with outsourcing are clear, agents aren’t Full-Time Employees and don’t have the same rigorous guarantees that vetted employees would, so they may be more mistake-prone when handling reports.”

Teleperformance describes itself as a market leader in content moderation services. It employs more than 7,000 trust and safety workers globally. It has moderated content for social media companies including TikTok and Meta. Its operations in West Africa have recently come under renewed scrutiny: a 2025 investigation by The Bureau of Investigative Journalism4 found that after Meta’s contract with Sama in Kenya produced lawsuits and PTSD diagnoses among former workers, Meta shifted to Teleperformance in Ghana, where conditions were described as even worse. Low pay, punishing targets, inadequate mental health support, and workplace surveillance were the recurring allegations.

The global structure of the content moderation industry tends toward these outcomes by design. As one detailed account of the industry by Justin Brown5 put it, the chain typically runs like this: a tech platform contracts a Business Process Outsourcing firm, which sets up in a country with high English proficiency and low labor costs, hires workers on short-term contracts through sub-contractors, and creates “layers of legal insulation between the person watching a beheading video and the corporation whose product required it.” Discord, like every other platform in this space, benefits from that insulation.

What Turnover Actually Does to Enforcement

There is a tendency in tech company communications to talk about content moderation as if it were a purely mechanical process. Bad content goes in, and an enforcement action comes out. The policy is the policy. But anyone who has spent time studying how moderation actually works knows that this is wrong. Moderation is interpretive. It requires judgment. And judgment depends on experience, on context, on an accumulating understanding of how a specific community behaves and how bad actors within it operate.

When contractors cycle out every six to eighteen months, they take all of that context with them. The next wave starts fresh. They learn the written policies. They may not learn the unwritten ones, the enforcement precedents, the community-specific context that makes the difference between a harmful post and an edgy joke. The result is not just gaps in enforcement. It is the inconsistency that users experience as arbitrary.

On Discord’s6 own community forums and support pages, users have documented this experience in exhaustive, frustrated detail. One user posted in late 2023:

“I genuinely don’t get how Discord, a platform with millions of global users, can constantly have some of the worst enforcement of moderation against actual rule breakers, and then ban innocent people out of the blue without so much of a reason except ‘You violated TOS’ with no explanation.”

A 2024 report by the Anti-Defamation League found something even more pointed. In a study testing platform responses to posted hate speech and extremist content, Discord did not take down any content at all following the posting of antisemitic material. The ADL7 concluded this implied “either that it has little to no automatic filtering or that any filtering is not robust enough to catch” what had been posted. The content was only removed after manual reports were filed, and even then, the response was slower and less comprehensive than on Facebook.

The appeal process compounds the problem. Discord’s own transparency data8 for January through June 2024 showed that the platform restored only 1,705 accounts to their pre-action status out of 65,831 accounts that submitted appeals. That is a reinstatement rate of roughly 2.59 percent. Stated another way, when a user believes they have been wrongly actioned and appeals, they have a one-in-forty chance of getting their account back. Whatever noise is in the enforcement system accumulates and is rarely corrected.
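
For readers who want to check that arithmetic, here is a minimal sketch in Python using only the two figures published in Discord’s transparency report; the variable names are illustrative and do not come from any Discord system.

    # Reinstatement rate implied by Discord's Jan-Jun 2024 transparency report.
    # The two input figures are from the report; everything else is illustrative.
    appeals_submitted = 65_831  # accounts that appealed an enforcement action
    accounts_restored = 1_705   # accounts restored to their pre-action status

    reinstatement_rate = accounts_restored / appeals_submitted
    print(f"Reinstatement rate: {reinstatement_rate:.2%}")  # ~2.59%
    print(f"Roughly 1 appeal in {round(appeals_submitted / accounts_restored)} succeeds")  # ~1 in 39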

The Psychological Arithmetic of Moderation Work

Content moderation is among the most psychologically demanding forms of labor in the digital economy. The research is consistent on this. A study published in the journal Cyberpsychology9 found that commercial content moderators “manifested with a range of symptoms consistent with experiencing repeated trauma.” These included intrusive thoughts triggered by situations reminiscent of content seen at work, avoidance behaviors, anxiety, detachment, and cynicism. Researchers found the well-being risks for content moderators “may be comparable to professionals working in the emergency services or caring professions, such as social workers.”

The relationship between that psychological burden and the turnover problem is direct. Research on BPO trust and safety roles by Zevohealth10 has found that high turnover rates drive “significant costs associated with recruitment, training, and loss of institutional knowledge.” Studies cited in that research suggest that replacing an employee costs between half and double their annual salary. In high-stress industries like content moderation, those costs accumulate fast. But the less quantifiable cost, the one that users bear, is the loss of expertise that leaves with every departing worker.

A report by the Global Trade Union Alliance of Content Moderators11, published in June 2025, surveyed moderators across six countries. The findings were stark. Workers described productivity quotas that had more than doubled in a single year.

“We have to watch videos running at double or triple speed, just to keep up. There’s no time to think. No time to process,” said one moderator in Tunisia.

The report argued that even as AI automation has expanded, it has not reduced human exposure to traumatic content. Instead, it has concentrated it: humans review the material that algorithms cannot classify, which is typically the most disturbing, most context-dependent, and most judgment-intensive content in the queue.

This concentration is important. When a contractor is reviewing the hardest cases under the most pressure, with insufficient training and inadequate mental health support, and then leaves the job within eighteen months, the systemic effect on enforcement quality is not marginal. It is structural.

Inside the Opaque Machine

One of the more candid looks inside Discord’s moderation thinking came from a piece by journalist Casey Newton at Platformer12, who was given access to an internal meeting of Discord’s trust and safety team. The account is worth sitting with. A product policy specialist described the platform’s enforcement approach, without apparent irony, as “a fascinating case of over- and under-enforcement.” The team was debating how to handle servers where harm had occurred but the owner was not necessarily the responsible party. Discord, it turned out, does not have “a totally consistent definition of who counts as an active moderator” in a given server.

This definitional instability ripples outward. If Discord cannot consistently define who is responsible within a server for moderation purposes, it becomes very hard to apply consistent enforcement across servers. And without enforcement consistency, the platform’s rules function less as firm standards and more as suggestions that are enforced unevenly depending on which contractor processes a report, on which day, under what workflow pressure.

The disappearance of the dedicated Trust and Safety reporting category from Discord’s support page in July 2023 added another layer of user frustration. The Discord wiki13 noted that the removal “caused confusion and frustration among members of the Discord community who now find themselves searching for alternative methods to submit reports.” That change happened in the same year that Discord’s in-house trust and safety headcount peaked. The timing invites a question: was the reduction in user-facing reporting infrastructure a preparation for reduced internal capacity, or were the two unrelated?

Discord has not said. The company did not respond to requests for comment for this article.

The Scale Problem Nobody Is Solving

To give Discord its due, the volume problem it faces is genuinely enormous. The platform received millions of reports in just the first half of 2024. No workforce of 74 full-time employees handles that volume. No workforce of 400 contractors does either, not without automation doing most of the initial triage.

This is why Discord, like every platform of its scale, leans heavily on automated systems for initial detection and triage. And those systems are not neutral. They have patterns of failure. Over-enforcement in some communities. Under-enforcement in others. Language and cultural gaps that produce different outcomes for English-language content versus content in other languages. Research from health and safety consultancy Zevo Health14 has noted that AI moderation systems show “demographic bias and language gaps, under-enforcing harmful content in non-English regions while over-enforcing benign posts elsewhere.”

The interaction between automated triage and contractor review is where consistency breaks down most visibly. An algorithm flags content. A contractor reviews the flag. The contractor makes a judgment call based on their training, their fatigue level, their cultural frame of reference, and the quota pressure of their shift. A different contractor, on a different shift, in a different BPO facility, might make a different call on identical content. This is not a hypothetical. This is how the system works.
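
That flow can be sketched in a few lines of Python. Everything here is hypothetical and simplified; it illustrates the general shape of an automated-flag-then-human-review pipeline, not Discord’s actual implementation, and the numeric threshold simply stands in for one reviewer’s judgment on one shift.

    # Hypothetical sketch of an automated-triage plus human-review pipeline.
    # All names, scores, and thresholds are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Flag:
        content_id: str
        classifier_score: float  # automated system's confidence of a violation

    def review(flag: Flag, reviewer_threshold: float) -> str:
        """One contractor's judgment call: the threshold stands in for training,
        fatigue, cultural frame of reference, and quota pressure on that shift."""
        return "remove" if flag.classifier_score >= reviewer_threshold else "keep"

    flag = Flag(content_id="example-123", classifier_score=0.62)
    print(review(flag, reviewer_threshold=0.55))  # one reviewer: "remove"
    print(review(flag, reviewer_threshold=0.70))  # another reviewer, same content: "keep"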

A user on Discord’s15 own community forums described this experience with the bluntness of someone who has been through it: “considering there hasn’t really been human responses for some strange reason, it’s likely contributing to that factor still too.” The comment, posted in July 2023 during a wave of account bans that many users believed were not properly reviewed, captured something true about how the platform’s backend looks from the outside: you know there are humans somewhere in the system, but you cannot tell when, or who, or by what standard they are judging your case.

The Structural Incentives Nobody Wants to Name

Here is the uncomfortable arithmetic that sits beneath all of this. Hiring and retaining 400 experienced, well-compensated trust and safety specialists in-house would cost Discord, at a minimum, tens of millions of dollars per year in additional payroll, benefits, and infrastructure. Contracting that work to BPOs in lower-cost markets costs a fraction of that. The quality difference, the enforcement consistency difference, is real but hard to quantify in a way that shows up cleanly on a balance sheet.
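
To put rough numbers on that calculation: the sketch below assumes a fully loaded cost per in-house specialist and per outsourced seat. Both figures are assumptions for illustration, not anything Discord has disclosed.

    # Back-of-the-envelope comparison of in-house versus outsourced moderation payroll.
    # Both per-seat costs are assumed for illustration; Discord has not disclosed them.
    seats = 400                       # contract agents Discord described to the Senate
    in_house_cost_per_seat = 150_000  # assumed fully loaded US salary, benefits, overhead
    bpo_cost_per_seat = 30_000        # assumed annual cost of an outsourced BPO seat

    in_house_total = seats * in_house_cost_per_seat   # $60,000,000 per year
    bpo_total = seats * bpo_cost_per_seat             # $12,000,000 per year
    print(f"In-house estimate:   ${in_house_total:,}")
    print(f"Outsourced estimate: ${bpo_total:,}")
    print(f"Annual difference:   ${in_house_total - bpo_total:,}")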

Discord is not alone in making this calculation. It is the same calculation every platform makes. But Discord’s demographics make it particularly consequential. A 2023 Pew Research16 study found that a third of teen boys in the United States use Discord. The platform has been explicitly named by the Department of Homeland Security as a space where foreign terrorist organizations have attempted to radicalize minors. According to The Hill17, a 2024 school shooter in Iowa had posted warnings on Discord before the attack. These are not edge cases. They are the cost of an enforcement system that moves slowly, inconsistently, and with limited institutional memory.

The market research firm Everest Group projects that the trust and safety services market will reach eleven billion dollars. That growth is driven by demand for outsourced enforcement from platforms that cannot or will not build the capability in-house. The Trust and Safety Market Research Report18 produced by Duco noted that although outsourced service providers employ the vast majority of trust and safety workers, “the higher ratio of T&S software solutions reflects the growing early-stage investment” in automation as an alternative. The direction of travel is toward more automation, not more human expertise.

This is a problem. Content moderation research consistently shows that the cases automation handles worst are the ones that require the most human judgment: culturally specific harassment, coordinated manipulation, context-dependent hate speech, gray-area content that sits at the edge of policy. Those are exactly the cases where experienced, stable, well-supported human reviewers make the biggest difference. That is also, not coincidentally, where contractor turnover does the most damage.

What Better Looks Like

It is worth being precise about what the alternative is, because it is not simply “hire more people.” The research on improving content moderation outcomes points toward a combination of structural changes: longer contractor tenures supported by better pay and psychological care, robust institutional knowledge transfer systems, clearer escalation pathways, and genuine transparency about how enforcement decisions are made and reviewed.

On the psychological support side, there is at least growing formal recognition of the problem. The Global Trade Union Alliance of Content Moderators has now ratified a set of protocols calling for exposure limits, trauma-informed training, accessible counseling, and realistic productivity quotas. The alliance’s demands, backed by unions from nine countries, represent the first serious attempt to set industry-wide standards for moderator welfare. They are not binding. Platforms like Discord are under no legal obligation to follow them. But they establish a benchmark against which corporate behavior can be measured.

On the transparency side, Discord’s twice-yearly transparency reports represent a genuine effort. They provide enforcement statistics broken down by category, which allows researchers and journalists to track trends over time. But a California TOS disclosure form19 reviewed for this piece noted a revealing gap: Discord stated it was unable to disaggregate its enforcement actions “within the broader category” or to provide detail on whether specific actions were “actioned by company employees or contractors, actioned by artificial intelligence software, actioned by community moderators.” In other words, the transparency reports tell us what happened, not who decided it, or by what process, or under what constraints.

That gap is not an accident. It reflects a platform that has structured its enforcement around accountability-insulating layers. The contractor reviews the ticket. The BPO employs the contractor. Discord contracts with the BPO. When something goes wrong, and things go wrong every day at scale, it becomes genuinely difficult to assign responsibility within that chain.

The Users Left Waiting

The most honest accounting of this system comes from the people who interact with it when something bad has happened to them. A professional game moderator with eight years of experience posted a description of trying to report a user on Discord that remains, years later, one of the clearer accounts of what the system feels like from the outside. In a 2019 post on Discord’s20 own support forum, they wrote:

“It is abnormal that the Discord moderation team appears to be working out of Zendesk or some kind of ticket system rather than an actual report system to handle TOS violations. Perhaps this is currently cheaper for the company, but it means a lot of case information is potentially lost, and a lot more is put on the user to accurately report cases that most reporters are not going to be experienced in documenting.”

That post is from 2019. Much of it still applies in 2026. The fundamental architecture, contractor-heavy, ticket-driven, reactive by default, has not changed in any meaningful structural sense. Discord has added features and invested in child safety specifically. But the underlying question of whether a platform with hundreds of millions of users has the institutional infrastructure to enforce its own rules consistently, fairly, and at speed remains unanswered.

A February 2024 forum post about Discord moderation captured the lived experience of this with uncomfortable clarity. The author wrote of experiencing what they described as “anarchic moderation across servers” driven by people who “simply aren’t mentally capable or mature enough” to manage large communities. That post was directed at server-level moderation, not platform-level enforcement, but the two failures compound each other. Discord’s strategy, partly by design and partly by resource constraint, has been to push enforcement responsibility down to server owners. When those owners fail, as they often do, the platform-level backstop is a contractor in a queue.

Accountability at Scale

I keep coming back to a number from Discord’s first transparency report, published years ago now. The company noted that when it received a report, the team took a record of any action. It also said that “for many reasons we generally can’t share” the specifics of what action was taken. That opacity was framed as a privacy protection, and, to be fair, there are legitimate privacy reasons for it. But it has also functioned as a shield. Users who experience what feels like arbitrary enforcement have almost no mechanism for understanding whether their experience reflects a policy failure, a contractor error, an automated false positive, or something else entirely.

That opacity creates a system where the burden of error falls almost entirely on users. The platform’s contractors make a decision. The platform’s automated system upholds it. The user submits an appeal. The appeal is denied. The account stays disabled. The user creates a new account and starts over, or they leave. No one at Discord is required to explain why.

The European Union’s Digital Services Act is beginning to change this, at least in Europe. Under DSA requirements, Discord21 now publishes a database of enforcement actions for EU users, including information about the basis for each action. That database represents the most granular public record of Discord enforcement that exists anywhere. It does not exist for users outside the EU. It is the product of regulatory compulsion, not voluntary transparency.

What the DSA database reveals, to anyone willing to look, is the sheer volume and variety of enforcement actions Discord takes. It also makes visible the inconsistencies that users have been reporting for years. The same type of content, actioned differently depending on who reviewed it, when, and under what policy version. Patterns that suggest shifts in enforcement priorities rather than the stable application of consistent standards.

The Cost of the Bargain

Discord built something genuinely valuable: a platform architecture that allows communities to self-organize around almost any interest, at almost no cost to participants. That architecture has also created the conditions for real harm, and the platform’s response to that harm has been structurally constrained by a funding model that never fully priced in the cost of proper enforcement.

The contractor workforce is not a conspiracy. It is a rational response to an impossible volume problem on a lean budget. But rational at the level of the balance sheet does not mean acceptable at the level of platform safety. When the people making enforcement decisions are underpaid, over-exposed to traumatic content, cycling out every year, and operating without the institutional knowledge that makes enforcement consistent, you do not have a trust and safety system. You have the appearance of one.

The difference matters most to the people who need the system to work: the minor targeted by a predator, the community moderator reporting coordinated harassment, the user who got caught in an automated false positive with no meaningful recourse. For them, the revolving door of contractor enforcement is not an operational detail. It is the reason help did not come when they needed it.

According to NBC News22, Discord’s CEO Jason Citron testified before the Senate Judiciary Committee in January 2024 that Discord uses “a mix of proactive and reactive tools to enforce its terms of service and community guidelines.” That is true as far as it goes. The question is whether that mix is calibrated to the actual scale and character of harm on the platform. Based on the evidence available, the answer is that it is not. And the contractor turnover problem sits near the center of why.

Until platforms like Discord treat enforcement consistency as a design requirement rather than a cost center, the people who depend on those systems will keep waiting. And the queue, somewhere in a BPO facility they will never see, will keep moving.

Sources

  1. Goggin, Ben. “Big Tech companies reveal trust and safety cuts in disclosures to Senate Judiciary Committee.” NBC News, 29 Mar. 2024, www.nbcnews.com/tech/tech-news/big-tech-companies-reveal-trust-safety-cuts-disclosures-senate-judicia-rcna145435. Accessed 5 Apr. 2026.
  2. Cryptopolitan, www.cryptopolitan.com/discord-announces-major-layoffs/. Accessed 5 Apr. 2026.
  3. Time, time.com/6231625/tiktok-teleperformance-colombia-investigation/. Accessed 5 Apr. 2026.
  4. Jackson, Jasper. “Social media moderators’ lives are getting worse. Big Tech needs to take responsibility.” The Bureau of Investigative Journalism, www.thebureauinvestigates.com/stories/2025-04-27/social-media-moderators-lives-are-getting-worse.-big-tech-needs-to-take-responsibility. Accessed 5 Apr. 2026.
  5. Brown, Justin. siliconcanals.com/sc-d-i-spent-a-year-inside-the-content-moderation-workforce-in-nairobi-and-manila-the-human-cost-of-making-ai-safe-is-a-class-story-nobody-wants-to-tell/. Accessed 5 Apr. 2026.
  6. Discord Support, support.discord.com/hc/en-us/community/posts/19356890741655-Horrible-Moderation-and-Appeals. Accessed 5 Apr. 2026.
  7. Anti-Defamation League. “Private Online Spaces Pose Serious Content Moderation Challenges.” www.adl.org/resources/report/private-online-spaces-pose-serious-content-moderation-challenges. Accessed 5 Apr. 2026.
  8. Discord. “Discord Jan–Jun 2024 Transparency Report.” cdn.prod.website-files.com/625fe439fb70a9d901e138ab/67056a054d453d30491c1ac9_Discord%20Jan_Jun%202024%20Transparency%20Report.pdf. Accessed 5 Apr. 2026.
  9. Spence, Ruth. “The psychological impacts of content moderation on content moderators: A qualitative study.” Cyberpsychology: Journal of Psychosocial Research on Cyberspace, cyberpsychology.eu/article/view/33166. Accessed 5 Apr. 2026.
  10. O, Tara. “Risks of Overlooking Wellbeing in BPO Trust and Safety Roles.” Zevo Health, 1 July 2025, www.zevohealth.com/blog/the-5-risks-of-ignoring-wellbeing-when-choosing-a-bpo-in-trust-and-safety/. Accessed 5 Apr. 2026.
  11. “Global content moderators alliance demands Mental Health Protocols in Tech supply chain.” UNI Global Union, uniglobalunion.org/news/tech-protocols/. Accessed 5 Apr. 2026.
  12. Newton, Casey. “Inside Discord’s reform movement for banned users.” Platformer, 20 Oct. 2023, www.platformer.news/inside-discords-reform-movement-for/. Accessed 5 Apr. 2026.
  13. Discord Wiki, discord.fandom.com/wiki/Trust_and_Safety. Accessed 5 Apr. 2026.
  14. Teo, Michelle. “The Mental Health Impacts of AI-Driven Content Moderation.” Zevo Health, 9 Dec. 2025, www.zevohealth.com/blog/the-mental-health-impacts-of-ai-driven-content-moderation/. Accessed 5 Apr. 2026.
  15. Discord Support, support.discord.com/hc/en-us/community/posts/16018777971095-Account-disabled-for-alleged-TOS-and-Community-Guidelines-Violation. Accessed 5 Apr. 2026.
  16. Collier, Kevin. “Discord used by extremists to recruit US youth, officials warned.” NBC News, 24 Sept. 2025, www.nbcnews.com/tech/security/discord-messages-server-dhs-terror-extremist-charlie-kirk-fbi-patel-rcna232377. Accessed 5 Apr. 2026.
  17. The Hill, thehill.com/homenews/state-watch/4942350-iowa-school-shooter-likely-displayed-warning-signs-before-january-attack-report-finds/. Accessed 10 Apr. 2026.
  18. Chong, Vivian. “Trust and Safety Market Research Report.” Duco, 14 Mar. 2024, duco-public-static-assets.s3.amazonaws.com/Duco+TnS+MRR-+FINAL.pdf. Accessed 5 Apr. 2026.
  19. California Office of the Attorney General, oag.ca.gov/sites/default/files/2024.04.01%20Discord%2023Q4%20CA%20TOS%20Report.pdf. Accessed 5 Apr. 2026.
  20. Discord Support, support.discord.com/hc/en-us/community/posts/360043242771-Discord-Moderation-report-infrastructure. Accessed 5 Apr. 2026.
  21. Gikow, Stephen. “How We’re Evolving Our Safety Architecture For The Digital Services Act.” Discord, 27 Feb. 2025, discord.com/blog/evolving-our-safety-architecture-for-the-digital-services-act. Accessed 10 Apr. 2026.
  22. Collier, Kevin. “Discord used by extremists to recruit US youth, officials warned.” NBC News, 24 Sept. 2025, www.nbcnews.com/tech/security/discord-messages-server-dhs-terror-extremist-charlie-kirk-fbi-patel-rcna232377. Accessed 10 Apr. 2026.

