As generative AI booms, an invisible workforce powers its rise – and pays the price. Platforms like Outlier.ai, DataAnnotation.tech, Remotasks (Scale AI), and industry veteran Appen promise freelance “AI trainers” flexible, high-paying work helping build advanced models.
“Earn up to $40 an hour… teaching AI models how to write,” one Outlier recruitment message said.
But behind the ads and testimonials lies a very different reality. Workers on these platforms describe algorithmic scoring systems that misjudge their work, mysterious account bans with no explanation, weeks of unpaid training, impossible task deadlines and being treated as disposable cogs in an AI assembly line. Support and accountability are non-existent – many freelancers have to turn to Reddit forums, LinkedIn posts or even lawsuits to get answers.
This investigative report goes beyond Outlier to show how the whole ecosystem of AI data annotation sites operates on the same tactics. DataAnnotation.tech, a platform run by Surge AI (serving clients like Anthropic and Microsoft), along with Remotasks (Scale AI’s gig-work arm for clients such as OpenAI and Meta (inc.com)) and Appen (a long-time data crowdsourcing firm), all run the same playbook.
We’ll hear directly from contributors – via Reddit threads, Medium stories, LinkedIn posts, BBB complaints, Glassdoor/Indeed reviews – to see the human cost of training our machines.
AI trainers perform a range of tasks to fine-tune large language models — from writing prompts and correcting outputs to ranking AI-generated answers. This pie chart illustrates the estimated breakdown of typical responsibilities across major platforms.
The AI Boom’s Invisible Workforce
Building an AI chatbot or image generator isn’t just about clever algorithms – it requires millions of human-labeled examples and constant human feedback. This work, known as data annotation, involves a range of tasks: writing or correcting model answers, labeling images or text, crafting prompts and responses, ranking outputs by quality, and flagging errors or toxic content.
“Workers complete tasks such as writing and coding, which tech companies then use to develop AI systems,” explains TIME, noting that many AI models rely on supervised learning with labeled data (time.com).
Even cutting-edge “unsupervised” models often need a final human fine-tuning step. In other words, without legions of human annotators, large language models and generative AI would not achieve their impressive feats.
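For readers who have never seen the work itself, here is a minimal sketch of the kind of record one ranking task might produce – the field names and structure are illustrative assumptions, not any platform’s actual schema:

```python
# A sketch of one human-feedback record. Field names are illustrative
# assumptions, not any platform's real schema.
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    prompt: str       # the input shown to the model
    response_a: str   # one model-generated answer
    response_b: str   # a competing answer
    preferred: str    # annotator's choice: "a" or "b"
    rationale: str    # free-text justification, often required

record = PreferenceRecord(
    prompt="Explain photosynthesis to a 10-year-old.",
    response_a="Plants eat sunlight to make food...",
    response_b="Photosynthesis is how plants turn sunlight, water, and air into food...",
    preferred="b",
    rationale="B is accurate and pitched at the right level.",
)
```

Millions of records like this one, produced by hand, are what “human feedback” actually means in practice.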
This work is usually outsourced to online platforms. Some, like Amazon Mechanical Turk or Upwork, operate openly. But many AI firms prefer stealth. Scale AI, for example, channels gig workers into Remotasks, a separate, worker-facing site (time.com). Likewise, Surge AI reportedly runs Taskup.ai, DataAnnotation.tech, and Gethybrid.io as its crowd platforms (time.com).
“Companies say secrecy is to protect sensitive R&D,” notes researcher Milagros Miceli, “but they also prefer secrecy because it reduces the chances they will be linked to potentially exploitative conditions.”
In practice, this means tech giants like OpenAI, Meta, Google, Anthropic, and Microsoft have an opaque supply chain of human labor labeling data behind their AI – often in far-flung countries with cheap labor and scant oversight.
The people doing this work are typically hired as independent contractors, paid by task or hour with no benefits or job security. Many are educated professionals or domain experts drawn by the promise of remote, flexible work in their field of knowledge.
“Outlier is a platform where experts in various fields…help build the world’s most advanced AI,” its site proclaims. Indeed, platforms seek out linguists, coders, writers, physicians, you name it – anyone who can help train AI in specialized areas.
In theory, it’s a new kind of high-skilled gig work. In reality, contributors often find themselves performing repetitive, rigidly controlled micro-tasks under intense surveillance. As we’ll see, the same technology they help improve is used to monitor and judge their every move.
This chart compares estimated onboarding hours versus actual compensated hours during the first month across Outlier.ai, DataAnnotation.tech, Remotasks, and Appen.
Algorithmic Scoring: When AI Judges the Experts
One of the most common grievances across these platforms is the use of AI-driven or algorithmic scoring systems to evaluate contributor performance. Instead of consistent human oversight or feedback, workers are at the mercy of automated metrics that often misjudge quality and ignore human expertise.
On Outlier, for example, every task you submit is graded, and your “quality score” determines whether you can continue working.
“Instructions were extremely precise – a single misstep in formatting or failing to label an answer properly could lower my quality score,” wrote Shubhojeet Dey about his stint with Outlier.
If the score dips too low, “I would be kicked off the project” (medium.com). Under such strict conditions, even highly knowledgeable contributors can be ousted over trivial issues.
A PhD-level contributor might craft a correct solution but get a low score for using an unexpected format or exceeding some arbitrary length. The system leaves no room for nuance or expert judgment – if you don’t match the hidden algorithmic criteria, you’re out.
These scoring algorithms also create perverse incentives. Outlier tasks, for instance, impose tight time limits and then penalize workers for working “too slow.”
“Some tasks pay a ‘primary rate’ for a standard time limit. If you exceed that time, you earn a lower ‘secondary rate,’” Dey discovered (medium.com). In practice, this means a project advertised at $35/hour might drop to effectively $20/hour if you take your time to ensure quality.
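The arithmetic behind that drop is simple. Below is a rough model of the two-tier rate as Dey describes it – the exact mechanics of the secondary rate and the 90-minute duration are assumptions for illustration:

```python
# Rough model of Outlier's two-tier rates as Dey describes them.
# The secondary-rate mechanics and the 90-minute duration are assumed.
primary_rate = 35.0    # $/hr paid up to the task's standard time limit
secondary_rate = 20.0  # $/hr paid for time beyond the limit (assumed)
limit_hours = 1.0      # the task's standard time limit
actual_hours = 1.5     # a careful worker takes 90 minutes

pay = primary_rate * limit_hours + secondary_rate * (actual_hours - limit_hours)
print(f"${pay:.2f} for {actual_hours} h = ${pay / actual_hours:.2f}/hr effective")
# -> $45.00 for 1.5 h = $30.00/hr effective; the longer a task runs past
#    the limit, the closer the effective rate sinks toward $20/hr.
```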
Rushing might keep your pay higher, but risks mistakes that tank your quality score – a Catch-22. Multiple reviewers have noted how Outlier’s pay system is tied to speed: “They want you to put in full-time hours… although they state ‘work when you want’,” one Glassdoor review observed, adding that “training is unpaid and often opaque.”
(In one case, a worker was even accused of using AI tools to do her task simply because she took an hour to carefully correct the model’s output, resulting in an automated 1/5 review and a flag for supposed cheating.)
On Remotasks and Appen, algorithmic scoring is equally unforgiving. Appen’s search engine rating projects famously require raters to maintain an accuracy score (based on hidden gold-standard judgments); if your score falls below ~85%, you’re simply removed from the project – often without human appeal.
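To make the mechanism concrete, here is a minimal sketch of how such hidden gold-standard scoring could work – the ~85% threshold is the figure workers report; the grading logic is an assumption, not Appen’s actual implementation:

```python
# Sketch of hidden "gold standard" scoring as workers describe it: some
# tasks are secretly pre-answered, the worker's answers are compared
# against them, and one automated check decides the account's fate.
# The ~85% threshold is what workers report; the rest is assumed.
GOLD_ANSWERS = {"task_17": "relevant", "task_42": "off-topic", "task_93": "relevant"}
REMOVAL_THRESHOLD = 0.85

def review(worker_answers: dict) -> str:
    hits = sum(worker_answers.get(t) == gold for t, gold in GOLD_ANSWERS.items())
    accuracy = hits / len(GOLD_ANSWERS)
    # No warning, no appeal: below the line means removal.
    return "removed from project" if accuracy < REMOVAL_THRESHOLD else "active"

print(review({"task_17": "relevant", "task_42": "relevant", "task_93": "relevant"}))
# -> "removed from project": 2 of 3 correct (~67%) falls below 85%
```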
“Fired… no reason that I was told,” said one Appen web search rater, who called the company “a total scam and ripoff of your valuable time.”
He had passed a difficult exam and worked diligently, only to be auto-terminated by the algorithm.
On Remotasks, too, low “accuracy” or client rating can trigger a suspension. Yet workers often receive little feedback on what they did “wrong,” making these systems feel arbitrary and unjust.
As one expert told TIME, these sites lean on “algorithmic management to keep their costs low,” which “can result in the poor treatment that many workers experience.”
Errors or not, the platform can always find another contractor waiting in the wings.
Perhaps most emblematic is an Outlier project that tasked experts with “red-teaming” an AI – essentially trying to make the AI fail – under an impossible time crunch. Contributors were told to come up with prompts that expose the model’s weaknesses (e.g. producing harmful or nonsensical output) within just one hour.
According to multiple worker accounts, only a tiny fraction of participants managed to succeed; the rest “failed” the task and saw their quality ratings plummet. Ironically, those who did have deep expertise and a careful approach often refused to cut corners or submit half-baked prompts just to meet the timer – and were weeded out for it.
One Reddit contributor dryly observed that Outlier’s combination of tight time limits, constant retraining requirements, and pay rate cuts was “really becoming impractical.”
“Outlier is not the only AI training platform I am on, so I am well aware that a lot of these issues are just par for the course,” they added – a telling statement on industry-wide practices.
This chart contrasts advertised hourly pay with real-world effective rates after accounting for unpaid onboarding, task delays, and algorithmic deductions.
Silent Removals and Shadowbans
This horizontal chart estimates relative complaint volume for shadowbans and silent terminations by platform.
In a traditional job, if you underperform or violate a rule, you get warnings, perhaps a chance to correct course, or at least an explanation if you’re fired. On these AI gig platforms, workers often simply vanish from the system with zero explanation – a practice akin to shadowbanning or silent removal.
One day you’re working on tasks; the next day you’re locked out of your account or no new work appears, with no message from the company. Consider the experience of an educator on DataAnnotation.tech (a Surge AI site).
After diligently working on the platform and even being told they passed the initial assessment, they suddenly stopped receiving any tasks at all. Eventually they discovered their account had been deactivated – with $2,869 worth of completed work left unpaid.
“I emailed the companies’ support contacts, but did not hear back,” the worker reported in frustration. There was no explanation, no appeal; the door was just quietly shut.
Another contributor on a Surge-run platform recounted a similar story on Reddit: “IME [In my experience] paid out $800 then my account disappeared with $2869 work sitting unapproved. Absolutely zero contact/reply from their support…” Nearly $3,000 of his labor evaporated without a trace or a response.
Outlier workers have faced the same. Multiple reviews on Glassdoor and Indeed describe accounts getting “suspended” or “disabled” out of the blue. One Indeed review titled “account banned for no reason,” from April 2025, tells of a contributor who had “given my full patience to this platform” only to be accused by the system of “using third party software or automation tools,” resulting in instant account disablement (indeed.com). “The fact is I never do that… I only use a laptop and a phone (for the camera),” the person laments (indeed.com).
Similar stories abound on Reddit: “Out of the blue a week ago my account was suspended for violating their TOS with zero explanation. I’m always EXTREMELY careful not to do anything wrong,” wrote one Remotasks user.
Others were flagged for having “multiple accounts” or “misrepresentation” with no evidence provided.
This opaque ban hammer doesn’t just cut off future work – it often steals earned wages. A common allegation is that platforms intentionally suspend workers right before a payout is due.
“Scam company that will suspend your account so they don’t have to pay you for hours accrued,” one Glassdoor reviewer warned, claiming Outlier would “string [you] along” and then drop you.
On the Better Business Bureau, a November 2024 complaint from an Outlier contributor details how their account was suspended for alleged guideline violations (which they denied), and that Outlier refused to pay $525 owed for their services.
The user offered to provide proof of their innocence, “yet they want to take away my hustle,” they wrote, noting that even after “investigation,” Outlier support still wouldn’t release the funds. The BBB lists this case’s status as Unanswered, reflecting Outlier’s lack of response.
Even less dramatic scenarios feel like shadowbanning. Contributors often describe being “EQ’ed” (put in an endless Evaluation Queue) or simply not receiving new projects without explanation.
One day you have tasks; the next, the dashboard is blank. “Our system is unable to find a project for you… Please contact support,” read the message one Outlier freelancer saw after finishing a few tasks.
He reached out; “now I am waiting to hear back… not holding my breath” (glassdoor.co.nz). In many cases, support never responds at all.
As one veteran worker observed, “there is no transparency about [gig] worker relations” – management simply does not communicate (indeed.com).
This silent treatment extends to internal communication channels too. Outlier uses a private Discourse forum for project discussions; multiple workers reported being locked out of all forums and chats without notice after completing certain tasks (bbb.org).
“Losing access to all communication channels and history… wiped clean without my consent,” one person described, saying projects and even direct messages vanished overnight.
It’s as if the companies want to erase any trace of the worker’s involvement the moment they decide to cut ties. The psychological toll of this sudden, silent ostracization is not trivial:
“This prolonged absence of communication has resulted in emotional distress that could easily be alleviated with a simple acknowledgment or update,” the BBB complainant wrote. “Instead, I am left without support or guidance.”
This chart highlights the estimated percentage of workers who report missing or delayed payments.
Unpaid Onboarding and Unrealistic Tests
Before a contributor ever earns a dime, they typically must clear a gauntlet of onboarding steps, exams, and training modules – almost always unpaid. The length and complexity of these qualifications have ballooned as companies try to ensure “quality” (or filter out people unwilling to work for pennies).
The result is that many workers invest hours or even days of labor up front with no guarantee of any pay at all.
A BBB complaint against Outlier from late 2024 illustrates this “work-for-free” onboarding in detail. “Since joining Outlier, I have dedicated hundreds of unpaid hours to onboarding processes, training tutorials, assessment tasks, and project-specific modules,” the contributor wrote.
For every project, Outlier required a lengthy sequence: hours-long tutorials and webinars, extremely long guideline documents to read, lengthy training videos, followed by quizzes and exams on all that material (bbb.org). The complainant counted “over 80 onboarding processes, tutorials, and assessment tasks” they had completed for various projects – all unpaid.
In some cases, the “assessment tasks” were indistinguishable from real paid tasks, yet were compensated at only “a fraction of the offered hourly rate.” (In other words, the platform got essentially free labor by having newcomers do actual work as their “test.”)
Worst of all, this user experienced projects that “disappeared entirely” after they finished all required onboarding, meaning all that effort was for nothing (bbb.org). It’s hard to imagine a more blatant example of dangling a carrot and then snatching it away.
Outlier is not alone. Remotasks’ training center requires users to pass a series of courses and exams before paid work opens up. “To access a paying task, I first had to complete an associated (unpaid) intro course,” a journalist who signed up for Remotasks reported.
Every specialization – say, labeling autonomous vehicle sensor data – had its own multi-hour tutorial and test. If the project ended or you failed the exam, tough luck.
“Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end,” The Verge found, describing Remotasks’ feast-or-famine workflow.
One Kenyan Remotasks worker said he avoided certain tasks entirely because the training was long and the pay too low to be worth it.
Appen, too, requires unpaid training. Many Appen contractors recall spending 20+ hours reading dense guidelines and taking a tough exam (sometimes split into three parts over several days) for roles like search engine evaluator – all unpaid. Those who fail simply get a form email weeks later, if that. “If a user isn’t accepted… they typically don’t hear anything after completion of the assessment,” TIME notes as a common scenario (time.com). Even those who pass may sit idle for weeks before any paying task appears. “I was accepted… then my recruiter disappeared,” reads the title of one frustrated Glassdoor review.
The pattern is clear: these platforms demand free labor up front, calling it “assessment” or “training.” Some workers tolerate it hoping it pays off. Others call it out as a scam. “They seek highly educated people under false pretenses for unpaid and nonexistent work,” one angry reviewer wrote, saying Outlier “lies and gaslights you” through the process. Another states simply: “Training is unpaid and doesn’t cover what you actually need to do” (glassdoor.com) – after all that prep, you’re still thrown into tasks that differ from what was taught.
In some cases, workers do everything right in onboarding and still see little or no reward. “I have been working for [DataAnnotation] a couple of months now and made a couple thousand bucks. They test and train all kinds of AI,” one Reddit user posted. “It’s definitely not a scam!” But others on DataAnnotation report a long wait after passing the test with no projects.
“I was told I passed the assessment, but then never got any tasks,” is a common refrain.
The luck of the draw can determine if you start earning or just end up in limbo.
Even when onboarding leads to paid work, initial pay rates don’t hold. Several Outlier contributors describe a bait-and-switch after training.
“Before completing the onboarding process, I was promised $25/hour… after a few training modules (long, redundant, unhelpful), it showed I would be making $15/hr,” wrote one worker (glassdoor.co.nz).
They worked nearly two hours and earned only $17 total – far below minimum wage once you include the training time. Then the system promptly ran out of tasks and locked them out with a tech support message (glassdoor.co.nz). Others mention being offered higher pay for specialized tasks (e.g. $40/hr for psychology content), only to rarely or never get those tasks, effectively putting them on $10–15/hr general duties instead (medium.com). It’s as if the platform sets an “up to $X/hour” headline rate to attract skilled applicants, then funnels most people into far lower effective pay once they’ve sunk time into onboarding.
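Counting the unpaid hours makes the gap stark. A rough calculation based on that account – the earnings and task hours come from the review itself, while the onboarding time is an assumed figure:

```python
# Effective pay once unpaid onboarding is counted, modeled on the
# Glassdoor account above. Earnings and task hours come from that
# review; the onboarding time is an assumed figure.
paid_earnings = 17.0    # dollars actually earned
paid_hours = 2.0        # hours spent on paid tasks
onboarding_hours = 4.0  # assumed hours of unpaid modules and quizzes

effective_rate = paid_earnings / (paid_hours + onboarding_hours)
print(f"${effective_rate:.2f}/hr")  # -> $2.83/hr, against the $25/hr promised
```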
To make matters worse, some tasks have unrealistic time or difficulty constraints clearly designed to winnow out the workforce. The aforementioned Outlier “make the AI fail in one hour” mission is one example. Another is the Remotasks “Traffic Direction” project reported by The Verge, which required workers to interpret aerial images for self-driving car training – a notoriously complex task – yet paid only around $1–2 per hour in Kenya.
“Everyone knew to stay away from that one: too tricky, bad pay, not worth it,” one annotator said.
These companies seem to have no qualms about setting up challenges that only a minuscule fraction of workers can succeed at under the constraints given. The rest either fail (providing free data in the attempt) or quit in frustration – filtering out those who won’t endure near-impossible demands.
The same AI training tasks pay vastly different wages depending on the contributor’s location. This chart shows average hourly earnings across five regions, exposing the stark global wage gap.
Churn and Burn: Disposable Workers by Design
Behind all these practices is a mindset that treats contributors as disposable labor, fueling a constant churn of new sign-ups to replace those cast aside. Rather than cultivating a stable pool of experienced annotators, the platforms operate a high-turnover model more akin to a digital sweatshop assembly line.
The sheer scale of hiring is telling. Outlier, a startup barely known a year ago, had “nearly 5,000 available jobs on Glassdoor” earlier this year as it recruited en masse (inc.com).
“They hire thousands to work on limited projects,” one Glassdoor review revealed bluntly.
Many of those projects end or dry up quickly, at which point those thousands of workers may be left with nothing – or find themselves axed over minor infractions as discussed. “The work is uneven… Training and feedback are a joke,” that same review added, highlighting chaotic management.
This churn is highly profitable for the platform owners. Scale AI (Remotasks/Outlier’s parent) and Surge AI (DataAnnotation’s parent) can brag about having huge on-demand workforces to win contracts from AI clients, yet keep individuals at arm’s length.
If a project needs 1,000 annotators for two weeks, they spin up 1,000 new “freelancers.” When it’s done, most will get no further work or will be trimmed down to a small core for maintenance – the rest effectively laid off (without ever being considered employees to begin with).
In fact, Scale AI and Outlier were hit with a class-action lawsuit alleging they illegally misclassified workers and laid off 500 people without notice or severance in August 2023 when projects slowed. As independent contractors, gig workers fall outside the WARN Act’s protections, but the plaintiffs argue they functioned as full-time employees in all but name.
The geographic distribution of labor also encourages a “race to the bottom.”
These platforms recruit globally, finding ever-cheaper pools of labor. A LinkedIn post by Analytics India Magazine blew the whistle on how Outlier (Scale AI) was paying Indian engineers as little as $7.50 per hour while advertising $40/hr for U.S. workers doing the same job. Payment delays for non-US workers were common, the post said, calling it “the dark side of AI development.”
Scale AI’s billions in venture funding and lucrative contracts stand in jarring contrast to its gig workers in India, Kenya, the Philippines or Venezuela making a few dollars an hour labeling data.
The Register noted that by late 2022, Kenyan Remotasks workers saw their pay drop to just $1–3 per hour, even as U.S. annotators on similar tasks made $15–25. When workers in higher-paying regions push back or leave, the platform simply shifts more work to lower-cost regions. This constant churn and wage arbitrage keeps costs low – and workers perpetually insecure.
Performance surveillance further underscores the disposability. Every click, keystroke, and submission is tracked. The moment productivity dips or errors rise, the system flags you. Workers describe feeling like they’re always one mistake away from being removed, with no human manager to hear them out.
“People will literally put up with it because they know jobs are scarce,” one Reddit commenter noted.
“The crappy thing is [the platforms] know people don’t have a lot of job options right now and that’s why they likely do it.”
Fear and desperation become management tools in lieu of fair pay or support. Why invest in any single worker when another hundred are signing up tomorrow? As long as the AI companies keep needing more labeled data, the platforms will keep cycling through human labelers in an endless onboarding-and-churn process. In the words of one disillusioned Glassdoor reviewer,
“I can’t find any hope for their future due to this poor… management.”
No Accountability: Workers Left to Crowdsource Their Survival
If there is one theme that ties all these threads together, it is the complete lack of accountability and communication from the platforms to the workforce. Contributors find themselves in a feedback void: when things go wrong, the company is a brick wall. Outlier’s BBB profile shows multiple complaints marked “Unanswered” – the business simply never responded to the disputes (bbb.org).
“Management, if it exists, does not communicate,” wrote an Indeed reviewer of DataAnnotation (or perhaps Appen); “there is no transparency about… relations.”
In practical terms, support tickets disappear into a black hole.
One BBB complainant waited five days, then two weeks, with no response to urgent support inquiries about her locked account. “My inquiries seem to be ignored for unknown reasons,” she wrote, noting this was “contrary to standard business practices” (bbb.org). That’s putting it mildly – in any normal job, being unable to log in and not getting help for weeks would be unthinkable.
Because official channels are unresponsive or unhelpful, workers are forced to crowdsource help and information.
The Outlier AI subreddit (r/outlier_ai) has over 3,600 members who regularly share tips, complain about issues, and alert each other to red flags (inc.com).
“Difficulty in getting paid, arduous onboarding processes… and uncertainty regarding Outlier’s legitimacy” are common topics, Inc. magazine observed of the subreddit discussions (inc.com).
On these forums and on platforms like Reddit’s r/WorkOnline or r/WFHJobs, you’ll find hundreds of threads about Remotasks suspensions, DataAnnotation account issues, Appen no-work dilemmas, and more.
Often, the only “support reps” who answer are fellow workers who may have figured out a workaround or just offer commiseration. In some cases, exasperated workers turn to Twitter/X or LinkedIn posts to publicly shame the company, hoping to get a response.
(Outlier’s own Twitter account is empty, but replies to its tweets show users asking why they were banned or unpaid – with no reply). Even the Better Business Bureau and media inquiries sometimes get no response – Surge AI declined to comment to TIME.com, and Outlier/Scale AI gave only generic statements when asked.
This lack of communication extends to the work itself. Because of NDAs and secrecy, contributors often don’t even know which company’s AI product they are building – they are given code names and siloed tasks.
And they’re told not to tell anyone about their work. This isolates workers and makes collective action or even sharing “best practices” difficult (which, of course, benefits the platforms).
In the absence of official information, rumor fills the void. Workers speculate on why a project ended or why a ban happened, trading anecdotes on Discord or WhatsApp groups. (Remotasks workers in Nairobi coordinate via unofficial WhatsApp groups to alert each other when a good-paying task becomes available.)
It’s an almost Kafkaesque environment where the true intentions and operations of the employer are obscured, leaving workers to guess at how to stay in good standing.
All of this serves to minimize the platforms’ accountability. If no one can pin down why they were fired or who made the call, it’s hard to hold the company responsible. If workers are scattered globally and can only share stories on Reddit, it’s hard to organize or demand better conditions.
And the platforms’ clients – the big tech companies – maintain plausible deniability about the labor issues, since they contract the work out to these middleman services.
As Privacy International noted, the vast scale of labeled data needed for AI has led companies to spread the work across many countries and contractors, “with cheaper and more [workers]” in each, making abuses harder to track. The result is a diffuse, atomized workforce with no leverage, at the mercy of each platform’s internal algorithms and policies.
From the perspective of Silicon Valley, this system has been a huge success: AI models get better and better, costs stay low and labor issues are out of sight, behind layers of outsourcing. But for the people on the other side of the screen, these AI training platforms are a new kind of digital sweatshop – one that harnesses human intelligence while denying human dignity.
The stories of Outlier, DataAnnotation.tech, Remotasks, Appen and others show a pattern of luring skilled workers into the AI boom with promises of high pay and flexibility, only to subject them to old-fashioned exploitation: unpaid work, erratic pay, constant surveillance, sudden terminations and zero voice or recourse.
It’s bitterly ironic that the same companies touting “AI for good” and revolutionary tech are, behind closed doors, running a playbook of labor practices worthy of the 19th century. The very algorithms these workers help refine are being used to manage and discard them. And the “AI boom” that promises to augment human potential is being built on a foundation of treating human workers as machine-like inputs – easily switched on and off.
The hidden exploitation of AI’s gig platforms calls into question the true cost of our smart new world. As we marvel at chatbots that can write poetry or cars that can (almost) drive themselves, it’s worth remembering the real intelligence behind these feats: the thousands of human contributors, clicking for hours in anonymity, uncredited and underpaid.
Bringing this labor out of the shadows is the first step towards reform. Regulators and AI companies need to acknowledge that quality AI isn’t possible without quality treatment of the people training it. Some are calling for transparency reports and labor standards for data annotation, just as there are for supply chains in manufacturing (theregister.com). Others advocate collective organizing of gig annotators across platforms.
At a minimum, these workers deserve fair pay for all hours worked (including training), clear communication and due process, and mental health support for the disturbing content some must handle. The alternative is to continue down the path we’re on – where the human cost of AI remains hidden in a cloud of NDAs and algorithms, and the people best equipped to improve AI are driven away by poor conditions. The next time you see an AI in action, think of the people in the shadows.
The story author has chosen to stay anonymous.
Loved our investigative piece? Subscribe, share, and spread the word!
You can also donate $10 to us on Paypal here – https://py.pl/MQRNd
Sources:
Effie Webb, “How do you stop AI from spreading abuse? Leaked docs show how humans are paid to write it first.” Business Insider, Apr. 4, 2025 (businessinsider.com).
Shubhojeet Dey, “Why I Left Outlier AI for a More Reliable Side Hustle.” Medium, Mar. 2025 (medium.com).
Sam Blum, “‘It’s a Scam.’ Accusations of Mass Non-Payment Grow Against Scale AI’s Subsidiary, Outlier AI.” Inc., Aug. 22, 2023 (inc.com).
Nik Popli, “Is Data Annotation Legit? What to Know About the Tech Jobs Fueling AI.” TIME, Jul. 18, 2023 (time.com).
BBB complaint by an Outlier AI contributor, Nov. 22, 2024 (bbb.org).
BBB complaint by an Outlier AI contributor, Nov. 16, 2024 (bbb.org).
Analytics India Magazine (via LinkedIn), “Outlier, a Scale AI subsidiary, under fire for exploiting workers…”, July 2024 (linkedin.com).
Reddit – r/WorkOnline thread, user account of a Surge AI platform (“Taskup”) disappearing, June 2023 (via time.com).
Reddit – r/outlier_ai, user Ronxjames comment on Outlier issues, late 2024 (reddit.com).