Tech companies are paying humans to teach AI how to smile, talk, and emote—but it’s hard not to wonder what we’re really uploading.

There’s a weird new job on the market that pays $50 an hour to make small talk and smile into a camera. No, it’s not a dating app gone dystopian—it’s the next phase of AI development.

Major tech companies are quietly hiring regular people to record themselves displaying various emotions—happy, sad, surprised, annoyed—so that AI avatars can learn how to mimic human expression. You chat, you emote, you get paid. It’s like community theater meets Silicon Valley. And it’s lucrative… especially for a gig that mostly involves pretending to be fascinated by imaginary conversations.

The ultimate goal? Hyperrealistic AI avatars that can sell, soothe, coach, or even just keep you company—with faces that feel real. But while the internet’s hyped about the tech, some of us are side-eyeing the idea of feeding future AIs the very things that make us human.

The Most Expensive Smile on the Internet

Let’s be honest: a $50/hour side hustle to look happy on camera is tempting, especially in a world where most jobs expect you to smile and barely pay you for it. But think about it. These recordings are meant to train AI tools that could eventually take over everything from customer service reps to mental health chatbots to romantic companions (yes, really).

In other words: you’re being paid to help machines learn to act more like you… so they can someday replace the need for you. Fun!

FaceTime Before FaceTime

Imagine this gig existed in, say, the late ’90s. Picture someone in 1997—baggy cargo pants, frosted tips, a full AIM away message life—getting paid to sit in front of a bulky webcam and record their best “genuine interest” face while talking about their Tamagotchi. Would the expressions have been more honest? Would we have smiled more? Talked more freely?

Back then, emotions weren’t filtered through 14 takes and a ring light. Facial expressions just happened—around dinner tables, at mall food courts, in the backseat of your friend’s mom’s minivan. You didn’t need a script or a paycheck to be expressive. You just… existed. And ironically, that made us better at being human.

Sad Face Upload

Now here’s the glitch in the matrix: we’re in an era that needs paid actors to teach machines how to look like they care—while we, the actual humans, are increasingly struggling with connection, communication, and depression.

Rates of anxiety and depression have skyrocketed alongside the rise of social media and algorithmic living. According to the CDC, depression among young adults nearly doubled between 2011 and 2023. People are lonelier, more isolated, and increasingly unsure how to navigate real emotions without curating them first. These days, the most expressive we get is a crying-laugh emoji or that one “skull” reaction to bad news.

So maybe these AI models are just catching up to the emotional fluency we’ve been steadily outsourcing since MySpace.

Real Emotions, Synthetic Faces

It’s a strange full-circle moment: we flattened our emotions to fit the grid, then trained machines to re-inflate them for us. We grew numb while teaching AI to simulate warmth. We forgot how to sit with awkwardness while coding bots to master empathy.

Of course, this new gig isn’t evil. It pays well. It’s weirdly fun. It’s probably helping some grad student make rent. But it’s also a little poetic that we’re trying so hard to make machines more “human” while we keep edging toward becoming a little more robotic ourselves.

So here’s the twist. Were we really happier back then—or did we just not talk about our sadness out loud? Maybe AI won’t replace us… maybe it’ll help us finally confront the parts of ourselves we’ve been ignoring.