Algorithms Aren’t Enough: The Strain and Secrets of TikTok’s Moderation Pipeline

By Joshita

TikTok says it uses both humans and machines to enforce its Community Guidelines. Its Newsroom1 publicly states that “over 85% of the content removed for violating our Community Guidelines is identified and taken down by automation” before it is ever reported by users. That statistic sounds like a victory for AI efficiency. It suggests speed, precision, and a system that can scale with the billions of videos, comments, and livestreams posted every month.

But the story does not end with automation. The human work of moderation still matters, and it matters in the hardest places. Humans step in where machines fail. They review borderline cases. They evaluate context. They judge subtle harm. They interpret cultural nuance. They make decisions when content sits in gray areas that AI cannot confidently understand. This includes satire, political content, misinformation, hate speech coded through slang, and graphic material that requires careful interpretation of intent. In other words, people deal with exactly the content that cannot be simplified into neat rules.

Behind the platform’s smooth user experience is a complex moderation pipeline. First, automated detection systems flag content based on patterns, machine learning models, and policy categories. Some videos are taken down instantly. Others are queued for review. That is where human moderators enter. They sit through long shifts watching repetitive, disturbing, controversial, or emotionally heavy material. They tag content. They approve or reject posts. Their decisions shape what billions of users see. Their decisions also determine what disappears quietly from the public feed.
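To make that pipeline concrete, here is a minimal, purely illustrative sketch of how such a triage flow could be structured. Every name, threshold, and category below is a hypothetical stand-in rather than anything drawn from TikTok’s actual systems; the point is only to show the three outcomes described above: instant automated removal, a human review queue, and publication.

```python
from dataclasses import dataclass

# Hypothetical confidence thresholds; a real platform tunes these per policy category.
AUTO_REMOVE_THRESHOLD = 0.95   # classifier is nearly certain the post violates policy
HUMAN_REVIEW_THRESHOLD = 0.40  # uncertain band: route to a human moderator

@dataclass
class Post:
    post_id: str
    violation_score: float   # output of an automated detection model, 0.0 to 1.0
    category: str            # e.g. "spam", "graphic_violence", "possible_satire"

human_review_queue: list[Post] = []

def triage(post: Post) -> str:
    """Route a post based on the automated classifier's confidence."""
    if post.violation_score >= AUTO_REMOVE_THRESHOLD:
        return "removed_automatically"       # the share handled before any user report
    if post.violation_score >= HUMAN_REVIEW_THRESHOLD:
        human_review_queue.append(post)      # borderline: needs human judgment
        return "queued_for_human_review"
    return "published"                       # low risk: stays on the feed

if __name__ == "__main__":
    for p in (Post("a1", 0.99, "spam"),
              Post("b2", 0.55, "possible_satire"),
              Post("c3", 0.05, "dance_trend")):
        print(p.post_id, triage(p))
```

In a setup like this, the human queue is where the gray-area material described in this article accumulates, so it is the review threshold, not the removal threshold, that determines how much ambiguous content lands in front of moderators.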

Yet this workforce is operating inside a system that moves fast and changes constantly. TikTok’s policies do not sit still. The platform updates rules weekly, sometimes even daily, in response to new trends, political pressure, global events, viral challenges, and public criticism. A joke that was allowed yesterday might be banned tomorrow. A dance trend could suddenly become classified as risky. A news clip could be labeled misinformation after new facts emerge. Workers say this creates a “moving target” style of enforcement where certainty is rare, and confusion is frequent.

So while TikTok’s public narrative emphasizes automation and AI-driven moderation, the reality is still deeply human. Real people continue to carry the emotional, mental, and ethical burden of maintaining the platform’s safety. And they are doing that work inside a pipeline that moves fast, changes frequently, and rarely slows down for the people asked to keep it running.

Who Is Controlling TikTok’s Moderation?

TikTok’s content moderation and policy2 enforcement are not handled by a single person. Instead, they are managed by a combination of senior executives, global trust and safety leadership, and specialized policy teams — all embedded within the company’s broader organizational hierarchy.

At the very top sits Shou Zi Chew, the Chief Executive Officer (CEO) of TikTok. Chew, a Singaporean business executive and former CFO of ByteDance, oversees the company’s global strategy, including content moderation strategy and regulatory responses. All major operational decisions, including trust and safety priorities, ultimately report up through his office.

Below the CEO, content moderation and trust & safety operations are overseen by senior leadership aligned with TikTok’s global operations. According to Reuters3 reporting tied to internal memos, Adam Presser, who serves as TikTok’s Operations Head, also oversees the company’s Trust & Safety unit — the division responsible for enforcement, moderation staffing, and implementation of policy decisions. Presser reportedly sent internal notices to staff about layoffs and the restructuring of the trust and safety team amid TikTok’s push toward automation.

On the policy development side, TikTok also engages external experts through advisory groups. According to GW Law4, the platform’s Content Advisory Council is chaired by Dawn Nunziato, a law professor and specialist in internet governance and free expression issues. This council is intended to advise TikTok on broader content policy issues (such as hate speech, misinformation, and bullying), offering an external perspective on internal decision‑making.

Further shaping moderation priorities, TikTok’s job postings and Reddit discussions about Trust and Safety Policy roles show that the company maintains teams dedicated to drafting, interpreting, and operationalizing new policies around harmful content and legal requests. These roles often bridge technical, legal, and enforcement channels within the company.

There are also policy specialists and legal operations teams within TikTok who act as intermediaries between moderation enforcement and requests from external institutions, including law enforcement or child safety agencies. According to workers familiar with these teams, they handle escalation of serious cases — such as child safety or law enforcement cooperation — indicating that trust and safety leadership is not solely about content takedowns but also about legal compliance frameworks.

What this structure shows is that decisions around moderation are shaped by a layered leadership model:

  • Top executive oversight by the CEO and operations leadership (e.g., Shou Zi Chew and Adam Presser).
  • Trust & Safety leadership teams that manage policy enforcement strategy and regional moderation operations.
  • US and regional safety heads who tailor enforcement toward local regulations and legal frameworks.
  • External advisory councils that influence high‑level guidelines and ethical considerations.
  • Specialist policy and legal teams that interpret and escalate complex cases.

TikTok publicly states that safety and content enforcement are a core commitment and a continuous process — emphasizing both technology and human review — but the internal power to shape and control these systems ultimately sits with a handful of senior executives and specialized teams within the company’s global hierarchy.

What Moderators Do Every Day and the Human Cost of It

Moderators spend their workdays inside an environment most users will never see. Their core job is to review short videos, livestream clips, comments, captions, duets, and user reports. Every piece of content is checked against TikTok’s constantly evolving Community Guidelines. They are trained to identify violations ranging from hate speech and extremist propaganda to graphic violence, child sexual abuse material (CSAM), harassment, misinformation, scams, and self-harm content. Much of this happens at speed. Moderators often have only seconds to watch a video, analyze intent, match it to policy language, and decide whether it stays online or disappears.

The work requires intense focus. Moderators track subtle signals like symbols, coded language, gestures, political context, cultural references, and evolving slang. They must learn regional differences. Content that is harmless in one country may violate the law in another. They deal with borderline cases that AI systems are unsure about. They also handle user reports, which can range from serious issues to petty disputes, meaning they must separate genuine risk from false alarms.

A significant part of the job is exposure to distressing material. In one Reddit AMA, a TikTok content reviewer explained that their team “reviews videos, comments, and accounts for policy violations” and confirmed that experiences vary depending on department. Some reviewers mostly see routine violations like spam or minor rule-breaking. Others, especially those placed in high-risk categories, encounter deeply disturbing content. Workers have described reviewing material that includes sexual abuse involving minors, graphic violence, and extreme self-harm. These teams handle the “worst of the internet” so that regular users never have to see it.

This is not occasional exposure. It is a daily reality for many. Moderators cycle through hundreds or thousands of items per shift. They often work under strict performance targets measured by speed, accuracy, and consistency with policy updates. Every decision is monitored and audited. Mistakes can result in warnings or penalties. At the same time, emotional resilience is expected as part of the job. Many workers describe coping with psychological strain while still being required to remain efficient and precise.
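As a rough, back-of-the-envelope illustration of what those throughput targets imply, the snippet below assumes an eight-hour shift with an hour of breaks and a range of per-shift quotas. The specific numbers are assumptions chosen for illustration, not reported TikTok figures.

```python
# Hypothetical throughput arithmetic: seconds available per decision under quota pressure.
SHIFT_HOURS = 8
BREAK_MINUTES = 60  # assumed total break time per shift
working_seconds = (SHIFT_HOURS * 60 - BREAK_MINUTES) * 60

for items_per_shift in (300, 800, 1500):  # "hundreds or thousands of items per shift"
    seconds_per_item = working_seconds / items_per_shift
    print(f"{items_per_shift:>5} items/shift -> {seconds_per_item:5.1f} s per decision")

# At the high end, a moderator has well under half a minute to watch an item,
# judge intent, match it to policy language, and log a decision.
```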

The toll of content moderation goes far beyond repetitive, tedious work. For many moderators, repeated exposure to graphic, violent, or abusive content has led to documented psychological harm. Independent investigations and first‑person accounts make clear that the emotional impact of this work can be deep and long‑lasting.

According to reports compiled by the Business and Human Rights Centre5, current and former TikTok moderators in Turkey employed by Telus Digital (a contractor that reviews TikTok content) described major mental health struggles after repeatedly seeing disturbing material such as child abuse, extreme violence, self‑harm, animal cruelty, and terrorism content as part of their daily duties. Almost all of those interviewed for a Bureau of Investigative Journalism report said the work had left lasting damage. One moderator recalled that it

“affected me mentally for a very long time and I still have those scars on me … What I had witnessed … I also saw these things in my dreams.”

Another Turkish moderator explained that despite witnessing deeply distressing videos, they were not given adequate preparation or workplace safeguards. “I wasn’t given training … it felt very sudden,” they said, describing how little was done to protect their emotional well-being.

These accounts are backed up by union and worker‑advocacy perspectives. Christy Hoffman, general secretary of the global union federation UNI Global Union, criticized the lack of both mental health care and “fair pay that recognises their crucial role in shielding the public from dangerous content.”

This pattern of emotional harm is not unique to Turkey. Multiple Reddit posts from current and former moderators reveal the psychological strain involved. One moderator described nightmares, panic attacks, and PTSD symptoms after repeated exposure to extreme content, including live violence and child sexual abuse material. After submitting a doctor’s letter recommending reassignment, they reported the company ultimately placed them on unpaid leave and threatened termination. They wrote:

“They caused real psychological harm and now they’re just planning on cutting me loose.”

The human cost also shows up in industry‑wide reporting on similar moderation work. Although not specific to TikTok, a 2025 TIME6 article noted that roughly 81% of moderators reported inadequate mental health support from employers, and that newly proposed union‑driven safety standards aim to limit daily exposure, enforce living wages, and provide long‑term psychological care.

Beyond direct trauma, moderators face structural challenges. In the Turkish cases, workers are subject to strict accuracy targets with a minimum performance threshold, and they often earn wages that struggle to keep pace with local living costs. TBIJ7 reports that salaries were said to be between 19,000 and 35,000 Turkish lira per month, close to or below the regional minimum wage, despite the intensity of the work and high emotional risk.

Even access to psychological care can be limited. Moderators in Turkey were theoretically allowed to schedule counselling sessions — but only during scheduled breaks. One worker said many avoid these sessions altogether, because

“when we go to the psychologist our wellness breaks are cut off, so nobody is going … I prefer smoking.”

Policies That Change Too Fast

TikTok updates its Community Guidelines and enforcement strategies frequently as new issues arise on the platform. The official guidelines themselves state that they are “updated on an ongoing basis” to address new risks and harms and to clarify what is permissible on the app. The text explains that every rule section has definitions, examples, and clarifications added as the environment evolves.

In 2025, TikTok continued this process with a set of new updates aimed at tightening rules around misinformation and content misuse. One recent report on these changes noted stronger language around AI‑generated content, misinformation about public events, and misuse of automation tools designed to bypass platform systems. TikTok said it expanded the guidelines to make clear that “some edited or AI‑generated content can still be harmful” and must not mislead about matters of public importance. The company also said it would offer clearer summaries — “rules‑at‑a‑glance” — so creators can better understand what the policies require.

In a company statement included in those updates, TikTok8 framed the changes as helping creators “better understand obligations” and as providing more information about how its policies are applied in different regions. The platform also emphasised enforcement transparency and appeal processes for users who believe content was removed incorrectly.

However, public reactions and moderator accounts suggest that these frequent modifications have real effects in practice. Reddit threads from TikTok users show frustration over shifting enforcement outcomes shortly after rule changes. One thread criticising a 2025 policy update said TikTok changed how strikes and appeals are handled — a user wrote that even when an appeal was accepted, “the strike stays on your account for the whole 90 days,” meaning users remain penalised even after being vindicated.

Other Reddit commenters reported unpredictable enforcement following policy changes. One poster said their artistic video was muted or restricted as “misinformation,” even though the creator felt there was no harmful content and no clear explanation was provided. The writer described confusion about the updated guidelines, writing “I don’t understand how this is possible” when the cited violation seemed unrelated to the posted content.

These user experiences echo testimony from former moderators about the strain caused by constant rule changes. In a Guardian diary9 by a TikTok content reviewer, the moderator described a real challenge: when faced with rapidly evolving global news or crises — such as ongoing wars or disasters — teams often receive policy clarifications minutes, hours, or even days after they have already made moderation decisions on that content. In their words:

“If it’s a currently unfolding event … we might get advice minutes, hours, or days later after we have already made decisions.”

According to the same Guardian account, constant tweaks to policies create ambiguity that forces moderators to escalate complex cases to specialist advisers. These advisers are meant to have extra training and access to incoming policy updates, but reaching them for guidance can delay decisions on urgent content because responses are neither instant nor uniformly available.

Taken together, frequent policy revisions, limited clarity around enforcement, and late‑arriving guidance for real‑time events mean moderators must absorb new rules quickly, even as they work under pressure to make high‑stakes enforcement decisions. Both user complaints and moderator testimony illustrate that these rapid updates do not just live on paper — they actively shape the workflows, consistency, and outcomes of TikTok’s moderation practice in ways that are often confusing and unpredictable for workers and users alike.

Shift to AI and Global Restructuring

TikTok’s drive to automate content moderation has reshaped its workforce. As reported by Reuters10, in late 2024 and early 2025, the company cut hundreds of jobs in regions such as Malaysia as part of a strategy to lean more heavily on artificial intelligence and automation for content review. Reports from multiple outlets note that TikTok told staff the layoffs were part of efforts “to further strengthen our global operating model for content moderation,” reflecting a shift toward automated enforcement instead of purely human review. This shift was linked to investments in machine detection systems that now remove a large share of violating posts automatically.

According to ISIS11, in Malaysia, where TikTok historically employed large numbers of moderators, government officials confirmed that nearly 500 employees were laid off from moderation teams and related roles as the company pushed toward automation and outsourcing. TikTok said it would still maintain moderation operations in the country but would reduce the human workforce as machine systems took on repetitive tasks. Some employees told reporters that most duties would be outsourced to external partners, highlighting a structural shift in how enforcement work is organized.

This shift is not limited to Southeast Asia. In Berlin, Germany, TikTok’s Trust and Safety team — the unit responsible for screening harmful posts in German‑language content — faced mass layoffs as part of global restructuring. Euronews12 reported that around 150 moderators were set to lose their jobs as TikTok planned to replace the in‑house team with artificial intelligence systems and outsourced contractors. Workers protested the move with strikes under the slogan “We trained your machines, pay us what we deserve!” and demanded collective bargaining agreements with protections for severance and notice periods.

Employees and union representatives criticised the decision, arguing that AI cannot replicate the nuanced judgment human moderators bring to sensitive issues such as hate speech, self‑harm material, and misinformation. One union spokesperson said this raised serious concerns about platform safety and compliance with European regulations like the Digital Services Act, which require effective human oversight.

The impact has also been felt in the United Kingdom. Plans surfaced for hundreds of job cuts in TikTok’s UK moderation workforce, with reports from The Guardian13 that the company might reduce about 439 content moderator roles as part of restructuring and automation efforts. A coalition including the Trades Union Congress and the Communication Workers Union pushed UK MPs to investigate TikTok’s plans, warning that replacing critical safety roles with AI and low‑paid external labour could expose users — especially children — to harms such as deep‑fakes and abuse. The coalition’s letter cited ICO14 data showing that up to 1.4 million UK TikTok users are under age 13, underscoring the stakes of these moderation decisions.

Reddit threads from moderators and industry observers echo these developments. One post highlighted that TikTok’s automation pivot resulted in layoffs and uncertainty for moderators, with some UK workers noting that cuts increased after attempts to unionize, suggesting a link between labour organizing and restructuring pressures.

Across multiple regions, the pattern is consistent: TikTok publicly frames these shifts as efforts to boost efficiency and effectiveness while investing billions in trust and safety. At the same time, the move toward AI and outsourced labour has drawn criticism from moderators, unions, and online safety advocates who argue that these systems are “unproven” in delivering nuanced and reliable moderation outcomes and that replacing trained staff with cheaper labour risks lower quality enforcement and greater harm.

In short, TikTok’s evolving moderation model is reshaping the workforce globally. Human moderators are being reduced in number even as machine systems take on a larger share of policy enforcement. The shift has prompted protests, union action, and political scrutiny, spotlighting the tensions between automation, worker rights, and online safety in the digital age.

Worker Frustrations, Public Feedback, and Outsourced Moderation

TikTok’s moderation system is often invisible to regular users, yet it directly shapes what billions of people see every day. This has created frustration not just among moderators, but among the platform’s user base. On Reddit and other online forums, users frequently share experiences of inconsistencies in moderation outcomes. Posts highlight situations where innocent content is removed while clearly harmful posts remain online. One Redditor15 summarized their experience bluntly:

“I get 5–6 strikes a day… common posts get removed while racist comments remain.”

Another user described repeated bans for minor infractions, while blatant rule violations were ignored, creating the impression that moderation was arbitrary or capricious. Such posts reflect broader concerns that TikTok’s policies change too quickly, and enforcement is unpredictable: what is acceptable today might be banned tomorrow.

These inconsistencies are tied directly to how TikTok manages its moderation workforce. Much of the company’s human review is outsourced to contractors in countries such as Colombia, Kenya, and the Philippines. In Colombia, for example, moderators reportedly work six-day workweeks reviewing highly disturbing content for low pay. Reddit discussions from former moderators describe their work as a “never-ending battle” against violent videos, child abuse imagery, self-harm content, and harassment. One worker explained that the emotional strain was compounded by strict quotas and unrealistic performance expectations.

Outsourced work introduces additional risks beyond emotional stress. Investigations and moderator testimonies suggest that contractors sometimes have insecure access to sensitive material, including spreadsheets of violent images and unprotected databases of flagged content. These practices raise concerns about both worker safety and data protection. As one discussion noted, contractors were “shown files that should never be accessible without strict supervision,” highlighting systemic lapses in information security.

Pay and working conditions have also drawn criticism. Outsourced moderators often earn salaries below local living standards, despite performing high-risk, psychologically taxing work. In Colombia and Turkey, wages ranged from approximately £400 to £738 per month, while moderators reviewed some of the most graphic content available online. In some cases, employees reported having to work overtime or skip breaks to meet targets, further increasing stress and fatigue.

Moderators and whistleblowers have also pointed to insufficient mental health support. While some employers provide optional counseling or wellness sessions, workers report that these are difficult to access during shifts and rarely sufficient for dealing with repeated exposure to extreme content. One poster reflected:

“You are staring at videos of violence, abuse, and death all day, and the only support you get is a chat with a psychologist if you’re lucky… and it’s not enough to unsee what you’ve seen.”

Taken together, these factors create a cycle of frustration for both moderators and users. Rapid policy changes, inconsistent enforcement, outsourced labor, low pay, and inadequate support combine to make TikTok’s moderation system both unpredictable and emotionally taxing. Users encounter arbitrary removals or overlooked violations, while moderators face high-stress, poorly compensated work that leaves them exposed to extreme material without proper safeguards. The system, as UNI16 has put it, relies on

“human resilience to patch gaps that automation cannot fill, yet it often fails to support those very humans.”

Moderation, Accuracy, Automation Trade‑offs, Worker Rights, and TikTok’s Public Line

Behind TikTok’s claims of fast, large‑scale safety enforcement lies a series of difficult trade‑offs between what machines can do, what humans must still do, and how workers are treated when they do it. Independent research, worker testimony, and company statements all show a system that is powerful in some respects and deeply imperfect in others.

TikTok increasingly markets its content moderation as powered by cutting‑edge artificial intelligence. The company’s 2024 transparency disclosures claim that over 85 percent of violating content is initially flagged by automation before it is reported by users. TikTok frames this as evidence that AI is identifying harmful material efficiently at scale.

Yet academic and technical research indicates there are limits to automation’s current effectiveness. One study17 of multimodal content detection — systems that analyze text, audio, and video together — found that even advanced AI models reached only about 89 percent accuracy in classifying harmful content. While that may sound high, the remaining 11 percent consists largely of edge cases, where context, cultural nuance, sarcasm, or evolving slang play a role — precisely the situations machines struggle with most. These are the scenarios that require human judgment.
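To see why roughly 89 percent accuracy is less reassuring than it sounds at platform scale, the sketch below applies that error rate to a hypothetical daily volume of automatically classified items. The volume figure is an assumption chosen purely for illustration, not a TikTok statistic.

```python
# Hypothetical scale arithmetic: a small relative error rate leaves a large absolute residue.
accuracy = 0.89
error_rate = 1 - accuracy                # roughly 11% of classifications are wrong
daily_items_classified = 50_000_000      # assumed daily volume, for illustration only

misclassified_per_day = daily_items_classified * error_rate
print(f"Misclassified items per day: {misclassified_per_day:,.0f}")

# About 5.5 million items per day would be wrongly kept up or wrongly removed,
# and those errors cluster in exactly the contextual edge cases (satire, slang,
# cultural nuance) that end up in front of human reviewers.
```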

Researchers also note that human‑labeled training data is essential for building and improving these systems. But human labeling itself comes with risks. When moderators disagree or make mistakes — for example, misclassifying a harmless post as a violation — those errors become part of the dataset that trains future generations of models. This can unintentionally scale bias and inconsistency rather than eliminate them. In the words of one machine learning expert, as shown by Brookings18:

“Without careful oversight, the very data used to teach AI can bake in the same errors we want to reduce.” (Brookings Institution analysis)

One practical example: a TikTok video that uses satire to condemn hate speech might be flagged as hate speech by text‑based models that do not understand tone. A human reviewer is far more likely to see the context. But if that human label enters training data as “violation,” AI may replicate that misclassification later — a clear demonstration of how automation can amplify rather than fix errors.
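That satire example can be made concrete with a toy feedback-loop simulation. The sketch below is entirely hypothetical: it assumes satirical posts are actually benign, that human reviewers mislabel a fixed fraction of them, and that in each retraining round a share of the training labels is inherited from the previous model’s own outputs. It shows how an error introduced by human labeling persists and accumulates in the model instead of being corrected; it does not reflect TikTok’s actual training pipeline.

```python
import random

random.seed(0)

# Assumed rates, for illustration only.
HUMAN_MISLABEL_RATE = 0.15   # humans miss the satirical intent 15% of the time
MODEL_LABEL_SHARE = 0.70     # 70% of each round's training labels come from the model
N_POSTS = 10_000

def human_label() -> int:
    # 1 = "violation" (wrong for satire), 0 = "benign" (correct)
    return 1 if random.random() < HUMAN_MISLABEL_RATE else 0

model_violation_rate = 0.0   # round 0: the model has no bias against satire yet
for round_no in range(1, 6):
    labels = []
    for _ in range(N_POSTS):
        if random.random() < MODEL_LABEL_SHARE:
            # label inherited from the previous model's behaviour
            labels.append(1 if random.random() < model_violation_rate else 0)
        else:
            labels.append(human_label())
    # the retrained toy "model" simply reproduces the violation rate it was taught
    model_violation_rate = sum(labels) / N_POSTS
    print(f"round {round_no}: model flags {model_violation_rate:.1%} of satire as violations")
```

Run it and the toy model’s false-positive rate on satire climbs round after round toward the human mislabel rate: the mistake is learned and retained rather than averaged away.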

Despite these accuracy limits, automation does reduce the amount of harmful content humans must watch directly. In principle, removing the worst material through machines protects moderators from the most psychologically damaging work. But because AI is imperfect, it still requires human oversight. AI can miss violations and can also produce false positives — leading to both under‑enforcement (harmful content slips through) and over‑enforcement (innocent content is removed). This balance means humans remain indispensable — yet vulnerable.
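The under- and over-enforcement described above is, at bottom, a thresholding problem. The sketch below is generic and hypothetical: with a single confidence threshold applied to a fixed set of scored posts, lowering it removes more innocent content (over-enforcement), while raising it lets more harmful content through (under-enforcement), which is why a human review and appeal layer remains necessary.

```python
# Generic threshold tradeoff: one knob cannot minimise both error types at once.
# Scores are hypothetical classifier outputs; label 1 = truly violating, 0 = benign.
labeled_scores = [
    (0.97, 1), (0.91, 1), (0.62, 1), (0.55, 1), (0.33, 1),   # violating posts
    (0.88, 0), (0.58, 0), (0.41, 0), (0.12, 0), (0.05, 0),   # benign posts
]

def enforcement_errors(threshold: float) -> tuple[int, int]:
    false_negatives = sum(1 for s, y in labeled_scores if y == 1 and s < threshold)
    false_positives = sum(1 for s, y in labeled_scores if y == 0 and s >= threshold)
    return false_negatives, false_positives

for threshold in (0.3, 0.5, 0.7, 0.9):
    fn, fp = enforcement_errors(threshold)
    print(f"threshold {threshold:.1f}: {fn} harmful posts slip through, "
          f"{fp} innocent posts removed")
```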

As Kelly McBride explained in The Atlantic19, content moderation sits at the intersection of emotional labor, cognitive overload, and high-stakes decision‑making. Workers must interpret ambiguous content while also handling heavy workloads, performance targets, and shifting policies. These conditions make it harder to maintain both emotional resilience and consistent accuracy.

The shift toward automation has also sparked unionization efforts and legal disputes. In the United Kingdom, hundreds of moderators facing layoffs and restructuring linked to AI adoption have warned of legal action against TikTok and its contractors. Worker advocates argue that the layoffs coincided with union‑organizing activities, a practice they describe as “unlawful detriment” under UK employment law. According to Sky News20, this includes claims that employees were pressured to sign termination agreements, often without adequate notice or severance, while union efforts were underway.

Worker rights groups and unions such as the Trades Union Congress have criticized TikTok for replacing skilled local moderators with AI and outsourced labour in countries with weaker protections. A TUC spokesperson told reporters:

“We are seeing the replacement of skilled work with unproven AI‑driven content moderation and with workers in places like Kenya or the Philippines who are subject to gruelling conditions, poverty pay, and minimal support.” (TUC statement, 2025)

These disputes emphasize that moderation work is not only emotional labor but also labor with legal rights, protections, and economic value. Layoffs tied to automation raise questions about collective bargaining rights, fair compensation, and how emerging technology intersects with labor law. Critics argue that framing automation as purely technical obscures the real human impact — from job loss to weakened worker leverage.

So what does TikTok say about moderation?

In response to criticism, TikTok’s own statements emphasize both technology and human oversight. In its newsroom, TikTok affirms that trust and safety have “no finish line” and that a combination of AI plus skilled human moderators is necessary to enforce Community Guidelines at scale. The company says it continues to invest in both automation tools and human teams, even as it evolves its operational model globally.

A TikTok spokesperson told Social Media Today21 that policy updates and enforcement refinements aim to “provide clearer guidance to creators and reviewers alike” and to “deliver consistent results across regions.” The company positions automation as a way to improve speed and reduce harm, not as a complete replacement for human judgment.

The result is a system balanced between competing imperatives:

  • Scale — AI can handle billions of pieces of content quickly.
  • Subtlety — Humans are still needed for nuance and context.
  • Speed — Policy changes outpace clear guidance.
  • Rights — Workers demand fair treatment amid restructuring.

As one researcher put it: “Moderation is the frontier where law, labor, technology, and culture collide.” The systems that govern what stays online and what goes still rely heavily on human judgment, even as they increasingly lean on machines. But that reliance comes with questions — about accuracy, fairness, worker protection, and how a global company should balance efficiency with ethical responsibility.

Ultimately, TikTok’s public messaging emphasizes technological progress, yet real-world practice shows that automation and human labor are inextricably linked, and that the consequences for users, moderators, and society are still unfolding.

Sources

  1. “Create, discover, and connect on TikTok with simpler and stronger Community Guidelines” TikTok, 14 Aug. 2025, newsroom.tiktok.com/en-us/create-discover-and-connect-on-tiktok-with-simpler-and-stronger-community-guidelines. Accessed 12 Jan. 2026. ↩︎
  2. “Content Moderation” www.tiktok.com/euonlinesafety/en/content-moderation/. Accessed 10 Jan. 2026. ↩︎
  3. Reuters, www.reuters.com/technology/tiktok-restructures-trust-safety-team-lays-off-staff-unit-sources-say-2025-02-20/. Accessed 10 Jan. 2026. ↩︎
  4. Law, GW. “Nunziato Named Chair of TikTok Advisory Council” The George Washington University, www.law.gwu.edu/nunziato-named-chair-tiktok-advisory-council. Accessed 10 Jan. 2026. ↩︎
  5. “TikTok workers sue employer over ‘union-busting’ firings” Business and Human Rights Centre, 15 Mar. 2025, www.business-humanrights.org/en/latest-news/t%C3%BCrkiye-tiktok-content-moderators-sue-employer-over-poor-conditions-and-union-retaliation/. Accessed 12 Jan. 2026. ↩︎
  6. Perrigo, Billy. “Exclusive: Global Safety Rules Aim to Protect AI’s Most Traumatized Workers” TIME, 19 June 2025, time.com/7295662/ai-workers-safety-rules/. Accessed 12 Jan. 2026. ↩︎
  7. McIntyre, Niamh. “TikTok workers sue employer over ‘union-busting’ firings” TBIJ, www.thebureauinvestigates.com/stories/2025-03-15/tiktok-workers-sue-employer-over-union-busting-firings. Accessed 12 Jan. 2026. ↩︎
  8. “Helping creators understand our rules with refreshed Community Guidelines” TikTok, 21 Mar. 2023, newsroom.tiktok.com/en-us/community-guidelines-update. Accessed 12 Jan. 2026. ↩︎
  9. Farah, Hibaq. “Diary of a TikTok moderator: ‘We are the people who sweep up the mess’” The Guardian, 21 Dec. 2023, www.theguardian.com/technology/2023/dec/21/diary-of-a-tiktok-moderator-we-are-the-people-who-sweep-up-the-mess. Accessed 12 Jan. 2026. ↩︎
  10. Reuters, www.reuters.com/technology/bytedance-cuts-over-700-jobs-malaysia-shift-towards-ai-moderation-sources-say-2024-10-11/. Accessed 12 Jan. 2026. ↩︎
  11. Zainul, Harris. “TikTok owner moves toward AI, lays off 500 Malaysian moderators” ISIS, 12 Oct. 2024, www.isis.org.my/2024/10/12/tiktok-owner-moves-toward-ai-lays-off-500-malaysian-moderators/. Accessed 12 Jan. 2026. ↩︎
  12. Desmarais, Anna. “TikTok content moderators in Germany strike over AI taking their jobs” Euronews, 23 July 2025, www.euronews.com/next/2025/07/23/tiktok-content-moderators-in-germany-strike-over-ai-taking-their-jobs. Accessed 12 Jan. 2026. ↩︎
  13. Milmo, Dan. “UK MPs urged to investigate TikTok’s plans to cut 439 content moderator jobs” The Guardian, 13 Oct. 2025, www.theguardian.com/technology/2025/oct/13/uk-mps-tiktok-plans-cut-content-moderator-jobs. Accessed 12 Jan. 2026. ↩︎
  14. “Annex 2 – MPN” Information Commissioner’s Office, 3 Apr. 2023, ico.org.uk/media2/migrated/4025183/tiktok-mpn-annex-2-mpn.pdf. Accessed 12 Jan. 2026. ↩︎
  15. www.reddit.com/r/TikTok/comments/1fmw8ni/tiktok_help/. Accessed 12 Jan. 2026. ↩︎
  16. “TikTok content moderators strike in Berlin: “We trained your AI – now pay us!”” uniglobalunion.org/news/tiktok-content-moderators-strike-ai-berlin/. Accessed 12 Jan. 2026. ↩︎
  17. Zaky. “TikGuard: A Deep Learning Transformer-Based Solution for Detecting Unsuitable TikTok Content for Kids” ui.adsabs.harvard.edu/abs/arXiv:2410.00403. Accessed 12 Jan. 2026. ↩︎
  18. Lee, Nicol Turner. “Reimagining the future of data and AI labor in the Global South” Brookings, 7 Oct. 2025, www.brookings.edu/articles/reimagining-the-future-of-data-and-ai-labor-in-the-global-south/. Accessed 12 Jan. 2026. ↩︎
  19. “Thesis Manuscript For Pr Uricchio” 3 May 2012, cmsw.mit.edu/wp/wp-content/uploads/2016/09/147703396-Florence-Gallez-A-Proposal-for-a-Code-of-Ethics-for-Collaborative-Journalism-in-the-Digital-Age-The-Open-Park-Code.pdf. Accessed 12 Jan. 2026. ↩︎
  20. Carroll, Mickey. “TikTok faces legal action over moderator cuts” Sky News, 19 Dec. 2025, news.sky.com/story/tiktok-faces-legal-action-over-moderator-cuts-13485485. Accessed 12 Jan. 2026. ↩︎
  21. Hutchinson, Andrew. “TikTok Updates Community Guidelines To Address AI Misinformation” Social Media Today, 14 Aug. 2025, www.socialmediatoday.com/news/tiktok-updates-community-guidelines-misinformation-bullying/757740/. Accessed 12 Jan. 2026. ↩︎
