
Inside the Underground Market for AI Training Logins

Published December 7, 2025

Based on an original story reported for Business Insider.

There’s a new kind of black market growing alongside the AI boom — and it’s not about stolen models or secret code. It’s about logins.

As companies like Scale AI, Surge AI, Mercor and others race to hire tens of thousands of remote workers to help train chatbots, a shadow economy has sprung up around the accounts those workers need to get in the door.

Opportunists are now buying and selling “verified” access to these AI training platforms, letting people in barred regions — or people who failed the tests — sneak into projects they’re not supposed to touch.

An investigation found around 100 Facebook groups full of posts advertising this trade in AI training accounts. After the activity was flagged, Meta removed around 40 groups and pieces of content and says it’s still investigating. But the market hasn’t gone away — it’s just getting smarter.

Behind every “smart” chatbot sits an army of human contractors, often called ghost workers. They rate AI responses, write better ones, label images, translate text and more — usually through platforms like:

  • Outlier (run by Scale AI);
  • DataAnnotation.tech (used by Surge AI);
  • Other data-labeling and evaluation portals run for Big Tech clients.

Workers first have to create an account, pass screening tests, and then hope they get picked for projects. When they do, the work is flexible, remote and can sometimes pay north of $100 an hour for certain specialized tasks.

The catch? Work comes in waves, and it’s often limited to certain countries, languages or time zones. When projects dry up in one region but stay active in another, a predictable hustle follows: people start trying to “borrow” or buy their way into geography-restricted accounts.

Several contractors described the pattern:

  • Accounts with access are treated like assets.
    Sellers offer “verified” Outlier or Mercor logins that belong to users in countries like the US, where projects are still running.
  • Some of these accounts are real, some are fake.
    Either way, reselling is banned by the platforms — but that hasn’t stopped the trade.
  • Buyers hide where they are.
    People in countries with fewer opportunities, like Kenya, use VPNs or “shadow proxies” to route their internet through a device in the target country so the platform thinks they’re stateside.
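The reason this works is mundane: platforms generally infer a worker's location from the IP address their requests arrive from. Below is a minimal, hypothetical sketch of such a gate in TypeScript; the geoip-lite lookup and the ALLOWED_COUNTRIES list are illustrative assumptions, not any platform's actual code. The check only ever sees the proxy's exit address, never the buyer's real one.

```ts
// Hypothetical sketch of an IP-based country gate. The platform only
// sees the connecting IP: route traffic through a device in the US and
// the check passes, whatever the worker's real location.
// Assumed dependency: the open-source geoip-lite package (npm i geoip-lite).
import * as geoip from "geoip-lite";

const ALLOWED_COUNTRIES = new Set(["US"]); // hypothetical project restriction

function requestAllowed(remoteIp: string): boolean {
  const geo = geoip.lookup(remoteIp); // resolves IP -> { country, region, ... } or null
  return geo !== null && ALLOWED_COUNTRIES.has(geo.country);
}

// A direct connection from Nairobi fails the gate; the same worker's
// traffic exiting a proxy inside the US passes it, because the only
// evidence the check ever sees is the proxy's address.
```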

Two former Outlier contractors in Kenya said they personally know people who bought accounts this way, either because they couldn’t pass the screening tests or because there were simply no projects left in their region.

On top of that, a mini industry has formed around teaching people how to cheat the system. YouTube channels and Telegram groups sell:

  • “Guides” on bypassing geo-restrictions;
  • Answer sheets for onboarding tests;
  • Project-specific cheat sheets.

Some “account owners” don’t hand over full control but rent out their logins instead. In those setups, they:

  • Charge an upfront fee;
  • Take a cut of future earnings;
  • Or both.

One WhatsApp ad openly sought verified accounts for two major platforms — Outlier and Mercor — to buy and resell.

If this sounds like a playground for scammers, that’s because it is.

Both sides are nervous:

  • Buyers fear sending money and getting ghosted or receiving credentials for an account that doesn’t actually exist.
  • Sellers fear losing access to their own accounts or never seeing the promised revenue share.

Posts in Facebook groups are full of complaints from people who say they’ve been ripped off:
They wired money, then got blocked. Or they were given logins that didn’t work.

Two US-based contractors said they regularly get messages on Reddit from people offering them “fair payment” plus an ongoing cut if they’ll hand over access to their verified accounts. They’ve declined — both because platforms can ban them if caught, and because they would still be the ones responsible for taxes and any trouble tied to that account.

The big data-labeling firms are very aware this is happening.

  • Scale AI, Mercor, Prolific and Surge AI all say buying and selling accounts is explicitly prohibited.
  • They claim to use tools like device fingerprinting, IP monitoring and behavioral analysis to spot suspicious accounts (a simplified sketch follows this list).
  • Some companies even monitor social platforms for account-selling groups.
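None of these companies disclose how their detection actually works, but the kind of correlation such systems run is easy to imagine: flag accounts when one device fingerprint shows up behind several identities, or when a single account logs in from multiple countries. The sketch below is an assumption for illustration, not any vendor's real pipeline.

```ts
// Hypothetical sketch of two signals an anti-fraud system might correlate:
// one device shared by several "different" workers, and one account
// appearing from more than one country.
interface LoginEvent {
  accountId: string;
  deviceFingerprint: string; // e.g. a hash of browser/device signals
  country: string;           // derived from IP geolocation
}

function flagSuspicious(events: LoginEvent[]): Set<string> {
  const flagged = new Set<string>();
  const accountsPerDevice = new Map<string, Set<string>>();
  const countriesPerAccount = new Map<string, Set<string>>();

  for (const e of events) {
    // Signal 1: the same device fingerprint tied to multiple accounts.
    const accts = accountsPerDevice.get(e.deviceFingerprint) ?? new Set<string>();
    accts.add(e.accountId);
    accountsPerDevice.set(e.deviceFingerprint, accts);
    if (accts.size > 1) accts.forEach((a) => flagged.add(a));

    // Signal 2: one account logging in from several countries.
    const countries = countriesPerAccount.get(e.accountId) ?? new Set<string>();
    countries.add(e.country);
    countriesPerAccount.set(e.accountId, countries);
    if (countries.size > 1) flagged.add(e.accountId);
  }
  return flagged;
}
```

In practice a rented account exhibits exactly these patterns, which is why sellers in the Facebook groups increasingly pair logins with proxy setups in the account's home country.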

Meta confirmed that posts selling AI training accounts break its rules on fraud and scams, and said it has taken down dozens of groups and posts already.

“We use device, IP, and behavioral safeguards to identify and remove suspicious accounts before they can access any customer work,” a Scale AI spokesperson said.

But the black market is still visible — and evolving.

Internal Scale AI documents show just how messy this has become.

On one 2024 project for Google, thousands of contractors were flagged as “suspected spammers” or “cheaters.” Earlier spreadsheets, with titles like “Good and Bad Folks” and “suspicious non-US taskers,” tracked:

  • Contractors using VPNs;
  • People with multiple accounts;
  • Workers funneling payments through specific money apps.

In one late-2023 document listing 490 removed contractors, Scale cited:

  • 48 bans for using both a VPN and a USD-withdrawal payment app;
  • 70 bans for multiple accounts linked to the same person;
  • 11 bans for simply having two accounts;
  • 21 removals for consistently low-quality work.

Another internal tracker showed a separate project struggling with low-quality and copied answers. Managers discussed how to “be ahead of the spammers,” including:

  • Blocking copy-and-paste on certain tasks (see the sketch after this list);
  • Banning contractors from specific countries (including Egypt, Kenya and Pakistan) from participating in some projects;
  • Filtering out workers suspected of using ChatGPT to generate answers.
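The copy-and-paste block, at least, is simple to picture in the browser. A minimal sketch in TypeScript, where the "#answer" field and the telemetry endpoint are invented for illustration:

```ts
// Hypothetical sketch of a client-side paste block on a task's answer
// field. Client-side blocks are trivially bypassed, so a real platform
// would pair this with server-side quality checks.
const answerBox = document.querySelector<HTMLTextAreaElement>("#answer");

answerBox?.addEventListener("paste", (event: ClipboardEvent) => {
  event.preventDefault(); // discard whatever was on the clipboard
  // Record the attempt as a quality/fraud signal (assumed endpoint):
  navigator.sendBeacon("/telemetry/paste-attempt", "blocked-paste");
});
```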

Contractors in Kenya say that since late 2024, work opportunities in their region have fallen sharply — a drop they link directly to these anti-fraud crackdowns.

Where there’s a market, there’s phishing.

  • Some contractors have received fake “promotion” or “project invitation” emails asking them to enter their login details on bogus sites.
  • One user was banned after Scale discovered they were collecting other workers’ contact details and spamming them with account-buying offers.

So it’s not just about people voluntarily selling accounts — it’s also about criminals stealing real logins to resell them.

Data-labeling companies aren’t the only ones watching this grow.

Prolific’s VP of product, Sara Saab, said the company’s own research shows this isn’t one big mastermind operation. Instead, it looks more like a network of separate, opportunistic groups, with a level of organization that now rivals things like bank fraud or ticket scalping.

“Technologies that are helping data labeling companies are also helping people with bad intentions, fraudsters and scammers,” she said, calling it an “accelerating arms race” that requires companies to stay proactive, not just reactive.

At first glance, this might sound like a niche hustle. But the stakes are bigger:

  • For workers, buying or renting an account can mean losing money, leaking passport or ID data, or getting banned from platforms they might one day need.
  • For clients, mislabeled or low-quality training data can damage their models — especially if the person doing the work isn’t who or where the platform thinks they are.
  • For the AI ecosystem, it shows how quickly the “ghost work” behind AI can be gamed once money and geography-based access limits collide.

As tech giants throw billions at training AI models, the invisible infrastructure beneath — the humans doing the labeling, and the accounts that gatekeep that work — is turning into its own contested territory.

The models are getting smarter. So are the scammers.

Wyoming Star Staff
