Icy Tales

Workers Say Alignerr Owes Them Thousands. On Reddit, Their Complaints Keep Disappearing.

Icy Tales Team

“As per our records you have been removed from the Alignerr program. This decision is final, and we will not be reviewing any requests for reinstatement. Please note that Support is unable to assist with matters related to program removal.”

The worker who received this message, we’ll call her Maria, had completed sixty-two tasks that week. Her quality rating was 4.4 out of 5, well above the platform average. She had just been promoted to reviewer status, a position of trust that suggested the company valued her judgment. And now, $2,100 in earnings had evaporated. The dashboard that once displayed her performance metrics showed nothing. Her work history had been deleted. She had been ejected from the Slack channels where she’d collaborated with colleagues. The Discord server returned an error. It was as if she had never existed.

Maria is not her real name. Like the dozens of workers I spoke with over the course of this investigation, she asked to remain anonymous, fearing that speaking publicly could torpedo her chances of finding work on other platforms in the small, incestuous world of AI data annotation. But her story is not unique. Scroll through Glassdoor, mine the depths of Reddit, lurk in the private Discord servers where displaced annotation workers gather to compare notes, and you’ll find the same narrative repeated with eerie precision: Workers are hired. Workers complete tasks. Workers accumulate earnings. And then, often just before those earnings are scheduled to be paid out, workers are terminated — without warning, without explanation, without recourse, and without their money.

What makes Alignerr’s operation particularly sophisticated isn’t just the alleged wage theft. It’s what happens afterward. When workers try to warn others, their posts vanish from the company-controlled subreddit. When they persist, their accounts are banned. And when complaints surface in forums the company doesn’t control, a curious cast of characters materializes to dismiss, deflect, and discredit.

This is the story of an invisible workforce that powers the AI revolution — and the machinery designed to keep them silent.

Labelbox, the San Francisco company that operates Alignerr, has raised over $188 million in venture capital. Its client list reads like a Fortune 500 roster. Its website boasts SOC2 Type II certification, GDPR compliance, HIPAA compliance — all the badges that signal corporate legitimacy. The company describes its workforce as “our elite Alignerrs network, a powerhouse of global experts shaping cutting-edge models with evaluations and bespoke data.”

The pitch to these “elite experts” is seductive: flexible hours, remote work, intellectually stimulating tasks, and pay rates ranging from $15 to $150 per hour depending on expertise. For someone with a graduate degree working from a small apartment in Manila or Nairobi or rural Ohio, those numbers represent real money — life-changing money, potentially.

What the pitch doesn’t mention is what happens when the company decides it no longer needs you.

On Reddit’s r/selfemployed forum, a post appeared several months ago that would become a gathering point for Alignerr’s discarded workforce. The author, posting under the username u/Wooden_Ad1472, laid out a story that would prove representative:

“I worked for them but had one team member who constantly asserted herself in the chat to appear in charge – passive aggressive. We had a young PC (anime was his profile) who preferred to be more off than on and he began to rely on her to intercede for him. When I called this out I was deactivated. Got a bogus letter after seeking help to navigate the disgusting politics and was met with a cruel and ridiculous treatise about violating policies. I have an excellent education and experience and provided work that was highly rated by them. They left owing me $975. I plan on retaining legal assistance.”

The post accumulated 85 upvotes — significant engagement for a niche complaint about a company most people have never heard of. But it was the comments section that revealed the scope of the problem.

“They are indeed a scam,” wrote u/jaithere. “They have deactivated and withheld wages for far less (in fact, without giving any reason at all) to loads of employees. Someone was organizing a class action on twitter. I got banned from the alignerr sub for calling them out about this.”

“I thought the same thing, wait until it happens to you,” added u/Even-Ad-3759. “They paid me consistently for 4 months until they didn’t. I asked about 6 weeks’ worth of back pay, and they deactivated my account.” In a follow-up comment, the same user quantified the damage: “They still owe me $800 from 2 months ago.”

“Unfortunately, I must agree,” wrote u/ThumbsUpForCake. “Alignerr is indeed a scam. I haven’t received payment for one project I worked on in September, and when I stated that on their subreddit, I got banned.”

The testimonies kept coming. u/purposefulCA had deleted their account entirely, driven away by “too many unpaid Eval projects with no real Prod projects. They expect you to spend hours proving that you can do the work, without really having real work.” u/Itmine28 reported that “Alignerr claimed I didn’t do any task and rejected my manually inserted working time.” u/WeirdBluePerception described being muted and warned simply for asking questions: “I asked a question about an eval on my dashboard and kept getting silenced when I got met with a generic response from the Mod. I mentioned none of the channels are active and he said to ask in the channel. Wtf? Then I got a warning. That is abusive. Like how am I supposed to get clarification if you just mute me and accuse me of spam?”

The dollar amounts varied — $975 here, $800 there, reports on Glassdoor documenting figures from $126 to over $3,000 — but the pattern was identical. Workers complete tasks. Workers pass quality metrics. Workers are terminated immediately before payment. Workers are denied access to the evidence that would prove their claims. Workers who complain publicly are silenced.


The silencing mechanism deserves particular attention, because it reveals the sophistication of the operation.

Reddit, for those unfamiliar with its architecture, is organized into “subreddits” — individual communities devoted to specific topics. Anyone can create a subreddit, and whoever creates it becomes its moderator, wielding essentially unlimited power over what content appears and who gets to participate. If a company or its representatives create the subreddit devoted to discussing that company, they control the narrative entirely.

Alignerr has such a subreddit: r/alignerr. And according to multiple workers, it functions less as a community forum than as a reputation management tool.

“They’re also deleting any comments on their sub that call them out for the unpaid wages,” explained u/LurkSkyStalker, providing the most detailed account of the suppression mechanism. “The mod will message you publicly saying ‘We’re listening, I just DMed you’ and then instruct you to email support in that DM. You’ll never speak to a human from support and the mod will never respond after that.”

Read that again, because the technique is elegant in its cynicism. A worker posts a complaint about unpaid wages. The moderator responds publicly — visibly, for anyone reading the thread — with a performance of concern: “We’re listening, I just DMed you.” This creates the impression of a responsive company taking the issue seriously. But the direct message simply redirects the worker to email support, which sends form letters and resolves nothing. Meanwhile, the public complaint can be removed for “being handled privately,” and the moderator never follows up. The worker has been neutralized, the public record has been sanitized, and anyone researching Alignerr sees only the performance of corporate responsiveness.

Multiple workers confirmed experiencing this exact sequence. “I got banned from the alignerr sub for calling them out about this,” wrote u/jaithere. “When I stated that on their subreddit, I got banned,” wrote u/ThumbsUpForCake. “I got banned from their platform for this,” added u/rahul_vancouver, referring to the Reddit discussion itself.

The suppression extends beyond the subreddit. Workers report being simultaneously removed from Alignerr’s platform, ejected from its Slack workspace, and banned from its Discord server — a coordinated severance of every channel through which they might communicate with former colleagues or share evidence of their experiences. One Glassdoor reviewer stated it plainly: “They will also kick you from their Discord, refuse to answer emails, and completely ghost you. Furthermore, they run their own Reddit, so if you want to post anything there, it will get deleted as well.”


But perhaps the most revealing element of Alignerr’s information management isn’t the suppression of criticism. It’s the deployment of defenders.

When I began investigating the Reddit ecosystem around AI annotation platforms, one account kept appearing with unusual frequency: u/trivialremote. The account would materialize in threads where workers complained about non-payment or mistreatment, consistently defending the platforms and dismissing the complainers. The pattern was striking enough to warrant deeper examination.

What I found was illuminating.

The account is ten years old — an eternity in Reddit terms, and a significant detail. In the world of online reputation management, aged accounts are valuable commodities. They carry credibility that new accounts lack. An entire gray market exists for buying and selling aged Reddit accounts, which can then be repurposed for promotional activity without triggering the suspicion that a newly created account would invite. A ten-year-old account with nearly 12,000 karma is, in reputation management terms, a prized asset.

The account’s “Active in” communities, displayed publicly on Reddit’s profile page, tell their own story: r/outlier_ai and r/alignerr. These are the two primary subreddits devoted to AI annotation work — the exact spaces where reputation defense would be most valuable.

But it’s the posting history that proves most revealing.

The vast majority of u/trivialremote’s comments appear in r/outlier_ai, the subreddit for Outlier (owned by Scale AI, a major player in the AI data annotation industry). The comments follow a consistent pattern: defending the platform, dismissing worker complaints, and placing blame on the workers themselves.

When a worker expressed frustration about unreliable income, u/trivialremote responded: “Where did you get the impression that this platform would provide you with a reliable income that you can live on? The good news is, with your experience and time in the work force, you should have a comfortable amount of runway saved so that this isn’t ‘necessary’ work for you. Right?”

The rhetorical technique is worth examining. The question reframes the issue: instead of asking why a platform doesn’t provide reliable pay, it asks why the worker ever expected reliable pay in the first place. The follow-up implies that anyone without substantial savings is financially irresponsible. The burden shifts entirely to the worker.

When another worker described struggling after years of contract work, u/trivialremote was less subtle: “Stubbornly insisting on living paycheck to paycheck for 10 years is a choice.” And then, twisting the knife: “Taking an impromptu vacation to Maldives this weekend, good luck with your refresh button Mr(s) 10-years-but-0-runway lmao.”

When Scale AI announced layoffs affecting quality managers, u/trivialremote’s response was chilling: “Just researched, they only laid off a small percentage of worthless QMs. QMs often make peanuts and don’t really contribute anything worthwhile to the platform. Good business move tbh, giving the boot to the lower ranks of contributors like that.”

“Worthless.” “Make peanuts.” “Lower ranks.” “Giving the boot.” This is not the language of a fellow worker sympathizing with colleagues who lost their jobs. This is the language of management — or someone who has so thoroughly internalized management’s perspective that the distinction ceases to matter.

The account’s perspective on worker concerns was perhaps best summarized in a comment downvoted to a score of −23 — an unusually strong rejection by the Reddit community: “Then again, this is just one platform, and it’s just beer and vacation money, so no big deal either way.”

For u/trivialremote, the income that workers depend on is “beer and vacation money.” For the single mother working nights after her special-needs son falls asleep, described in the same Reddit thread by u/Solid_Duck_5466 — “I’m looking for other side gigs because I’m a single mom and I can’t really go punch a clock somewhere else” — it’s something else entirely.

But the most revealing comment from u/trivialremote wasn’t about workers at all. It was about access:

“Forwarded this context from u/ThatPlankton7895 to Alex, typically allow a 48 hour override from previous decisions.”

Read that carefully. The account claims the ability to forward user information to someone named “Alex” — apparently a company representative — and to facilitate a “48 hour override” of previous decisions. This is not the language of an independent contractor. This is the language of someone with insider access, someone who can influence company decisions about worker accounts.

Whether u/trivialremote is an employee, a paid reputation manager, or simply an unusually devoted platform loyalist with inexplicable access to company decision-makers, the effect is the same: workers who speak up face immediate pushback from an account that presents itself as a fellow worker while deploying management’s perspective and apparently wielding management’s tools.

The account’s broader Reddit activity — occasional comments on r/slaythespire, a video game subreddit; r/PTCGP; r/unpopularopinion — provides cover. It makes the account appear to be a real person with varied interests rather than a reputation management vehicle. But the overwhelming concentration of activity in r/outlier_ai and r/alignerr, combined with the consistent defense of platform practices, the dismissal of worker concerns, and the claimed insider access, paints a different picture.

[Screenshot: u/rahul_vancouver asks u/trivialremote for evidence of the five-figure income the account claimed to have earned from Alignerr. Rather than substantiate the claim, u/trivialremote deflects, resorting to whataboutery.]

There’s a term for what may be happening here: astroturfing. It refers to the practice of creating the appearance of grassroots support — or in this case, grassroots dismissal of criticism — through coordinated or compensated activity. The “astro” refers to AstroTurf, the artificial grass: it looks like the real thing but isn’t organic.

Reddit is particularly vulnerable to astroturfing because of its structure. The platform relies on the assumption that upvotes and comments represent genuine community sentiment. When that assumption is violated — when accounts are purchased, when activity is coordinated, when employees pose as independent users — the information environment becomes corrupted. Workers researching whether Alignerr is legitimate encounter what appears to be a community divided on the question, when in fact they may be seeing a manufactured debate.

A 2024 University of Michigan study on Reddit moderation found that moderator bias significantly shapes what content survives on the platform. When moderators have aligned interests with a company — or are the company — critical content faces systematic suppression. The study’s findings align precisely with what Alignerr workers describe: a subreddit where complaints are removed, complainers are banned, and the remaining discussion skews artificially positive.


The legal architecture that enables all of this is worth understanding, because it explains why companies like Alignerr can operate this way without obvious consequence.

Every Alignerr worker is classified as an independent contractor, not an employee. This single classification strips away virtually all labor protections: no minimum wage guarantees, no overtime requirements, no unemployment insurance, no workers’ compensation, no protection against wrongful termination, no right to collective bargaining. Workers are, legally speaking, small businesses contracting with a larger business — even though they use Alignerr’s tools, follow Alignerr’s guidelines, are monitored by Alignerr’s quality systems, and can be terminated at Alignerr’s whim.

The workers are also scattered across the globe — the United States, Canada, Australia, the Philippines, India, Brazil, Eastern Europe. When a worker in Manila is owed $500 by a company in San Francisco operating through corporate structures that may involve multiple entities, the cost of pursuing legal action exceeds the amount owed many times over. This jurisdictional fragmentation isn’t a bug; it appears to be a feature.

The contracts workers sign grant Alignerr broad discretion. As u/Reasonable_Ad7517 discovered: “The contract states that it is at their complete discretion to decide whether the work is acceptable and you will be paid for said work.” Combined with the company’s practice of deleting performance records, this creates a system where Alignerr can always claim work was unsatisfactory — and workers have no evidence to prove otherwise, because that evidence has been destroyed.

One Reddit user, u/o-m-g_embarrassing, laid out the legal argument in detail:

“Alignerr’s actions constitute a labor-fraud scheme because they: Induced work under false pretenses, Terminated employment without cause, and Withheld earned compensation using fabricated policy claims. That combination — deception, control, and financial loss — meets the definition of a scam.”

Whether it meets the legal definition of fraud is a question for courts. What’s clear is that it meets the functional definition: workers are doing work, not getting paid, and being silenced when they complain.


The human cost of all this extends beyond unpaid invoices.

“These sites actually over time really break down our sense of self,” wrote u/Wooden_Ad1472, the original poster in the Reddit thread that became a gathering point for Alignerr’s displaced workers.

That line stayed with me. It’s not just about the $975 this particular worker is owed, though that money matters. It’s about what happens when you invest your expertise, your time, your cognitive labor into work you believe will be compensated — and receive in return not just non-payment, but gaslighting. You’re told you violated policies that were never clearly communicated. You’re told your work was subpar when your metrics showed otherwise. You’re told to “move on” and “accept the outcomes” by anonymous Reddit accounts that may or may not be your former employer in disguise.

Over time, this breaks something. Trust. Self-worth. The belief that hard work and competence will be rewarded. The basic social contract that underlies employment.

u/Better_Profession474 offered perhaps the darkest assessment of Alignerr’s business model: “They primarily use unpaid evals to train their LLMs.”

If this is true — and multiple workers corroborated the pattern of extensive unpaid “evaluation” work with minimal paid production work — then the exploitation is even more fundamental than it appears. Workers believe they’re completing evaluations to qualify for paid assignments. But the evaluations themselves may be the product. The company harvests the cognitive labor for free, using the promise of future paid work as the inducement, then discards workers once their unpaid contributions have been extracted.

“Makes total sense,” replied u/Wooden_Ad1472. “They had thousands of rubrics listed as unpaid evals lol. Sad.”

Thousands of rubrics. Each one representing hours of human judgment, human expertise, human labor. All of it extracted for free, under the guise of a hiring process.


The AI revolution, we’re told, will transform everything. It will write our emails, diagnose our diseases, drive our cars, manage our investments, create our art. What we’re rarely told is who’s building it: a global workforce of invisible laborers, many of them working from their bedrooms in countries where $20 an hour represents extraordinary wealth, all of them disposable, all of them atomized, all of them one arbitrary termination away from losing everything they’ve earned.

Alignerr didn’t invent this model. As u/Itmine28 observed: “All LLM companies are like this. I’ve already encountered this countless times. Appen, Outlier, and now this. They’re all the same. They’re only nice to you because you’re still new. Wait until 6 months in, and they’ll show their true colors.”

But Alignerr, operating under the umbrella of a well-funded Silicon Valley company with Fortune 500 clients and industry-standard compliance certifications, represents the model in its mature form: sophisticated enough to maintain deniability, distributed enough to evade accountability, and — if the Reddit patterns are any indication — media-savvy enough to manage its own narrative.

The workers fighting back are doing so with the only tools available to them: screenshots, Reddit posts, Glassdoor reviews, the fragile solidarity of private Discord servers where they share information the company would prefer they didn’t have. They’re advised to file complaints with labor boards, to pursue small claims court, to document everything in case a class action materializes.

“Your local labor board will be interested in your unpaid wages and should be contacted,” wrote u/LurkSkyStalker.

“Just go down to the courthouse and file a small claims suit, no need for a lawyer,” suggested u/Away_Department_8480.

These are the strategies of people who have no institutional power, fighting against a company that holds all the structural advantages. Some will recover their money. Most won’t. The company will continue operating. The venture capital will continue flowing. The Fortune 500 clients will continue receiving their labeled data. And somewhere, a worker will complete their sixty-second task of the week, check their dashboard one last time before bed, and wake up to find that they no longer exist.


If you have information about practices at Alignerr, Labelbox, or similar companies, contact the author.
