xAI recorded its workers’ faces to help Grok learn how to act more human

xAI asked employees to record their faces

In April, xAI quietly launched an internal effort called Project Skippy, in which more than 200 employees were asked to record themselves in video conversations.

The goal? To help its AI chatbot Grok better interpret human emotion and expression. Employees were often instructed to simulate realistic, casual discussions with coworkers while capturing their faces and emotional responses.

However, the project’s internal nature and the scope of what’s considered “training” sparked concerns about privacy and how the data might later be used.

The training aimed to make Grok more human

Project Skippy’s mission was to help Grok understand and replicate human facial expressions. Internal docs said the videos would teach the AI how people talk, move, and respond emotionally.

Engineers told staff that using real, “imperfect” footage, background noise, and unpolished gestures would make the model more flexible and realistic.

The promise was clear: the more human Grok feels, the more natural the interactions. But not everyone was convinced that realism was worth the personal data cost.

Conversations were framed as training exercises

Each session involved two people: one acted as the AI “host,” maintaining a steady frame, while the other posed as the “user,” reacting naturally. These 15- to 30-minute sessions were designed to mimic spontaneous human dialogue.

The host’s role was robotic and minimal, contrasting with the expressive, mobile user. These recordings were meant to teach Grok what real emotional range looks like in daily conversation, but the format raised red flags for more than a few participants.

Employees had to sign a consent agreement

Before participating, employees were asked to sign a consent form granting xAI “perpetual” rights to use their likeness. While the form stated that the data would not be used to make digital replicas, it allowed its inclusion in training and promotional content.

That raised eyebrows. Some workers wondered aloud: if Grok learns from my face, can it someday be used to say things I never actually said? For many, that ambiguity was unsettling enough to opt out.

Not everyone was comfortable joining Skippy

As word of the project spread inside xAI, workers split into camps. Some leaned in, viewing it as part of cutting-edge AI training. Others were deeply uncomfortable with sharing such sensitive data, especially their faces.

Concerns ranged from digital misrepresentation to broader issues of biometric exploitation. Slack messages showed apparent discomfort, with several employees voicing doubts and declining participation. Their message was simple: this feels like a step too far.

Grok avatars launched soon after the project ended

Shortly after Project Skippy wrapped, xAI released two Grok avatars: Ani and Rudi. Ani, a stylized anime woman, and Rudi, a cartoon red panda, responded to user commands with facial movements and emotional tone.

Though the company hasn’t confirmed a direct link, many workers suspected their training data helped power these lifelike avatars.

That possibility only deepened their discomfort, especially when users began testing the boundaries of the avatars in unexpected, and sometimes inappropriate, ways.

Ani and Rudi quickly stirred controversy online

The release of Ani and Rudi caused immediate backlash. Ani could be prompted to undress and flirt, while Rudi, designed as a “cute” panda, could be manipulated into making violent threats, like bombing banks.

These troubling behaviors, especially from AI characters with expressive faces, made many question the ethical oversight of the project.

If this is how avatars are being used publicly, what does that say about the data that trained them or the boundaries of xAI’s development process?

xAI insisted facial data stays internal

During internal meetings, project leads assured staff that their recordings would never appear in the finished product. “Your face will never make it to production,” one engineer repeated, hoping to calm fears.

The purpose, they said, was to teach Grok what a face is, not to use anyone’s real face in the product.

Still, assurances weren’t enough for everyone, especially when paired with a broad consent agreement that left plenty of room for reinterpretation.

Topics included deeply personal questions

Employees were encouraged to choose from a list of provocative conversation starters to draw out honest facial reactions.

Suggested topics included: “Would you date someone with a kid?”, “How do you secretly manipulate people?”, and even “Do you shower in the morning or at night?” For many, these felt far too intimate for a workplace training exercise.

It’s one thing to record emotion; it’s another to pry into personal values and habits while on camera for your employer.

Imperfect data was a deliberate design choice

Engineers stressed that Grok needed exposure to real-world distractions. Clean studio footage, they said, makes for brittle AI models. Instead, they welcomed grainy video, unpredictable lighting, and offbeat audio.

This messy, human data was key to helping Grok respond to everyday situations with empathy and nuance. Still, the trade-off between realism and privacy wasn’t fully addressed, especially when some of that realism involved highly personal visual and verbal data.
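
xAI hasn’t published its training pipeline, but the approach the engineers describe, deliberately degrading clean footage so a model doesn’t overfit to studio-quality input, matches a standard machine-learning technique known as data augmentation. Here is a minimal sketch of what that can look like in PyTorch; it is purely illustrative, and the specific transforms and parameters are assumptions, not xAI’s actual code:

```python
import torch
from torchvision import transforms

# Illustrative only -- not xAI's pipeline. Shows the general idea of
# "messy data" augmentation: corrupt clean frames during training so the
# model stays robust to grain, bad lighting, and unsteady framing.

class AddGaussianNoise:
    """Add sensor-style grain to an image tensor with values in [0, 1]."""
    def __init__(self, std=0.05):
        self.std = std

    def __call__(self, tensor):
        noisy = tensor + torch.randn_like(tensor) * self.std
        return noisy.clamp(0.0, 1.0)

messy_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # unpredictable lighting
    transforms.RandomRotation(degrees=5),                  # unsteady framing
    transforms.GaussianBlur(kernel_size=5),                # soft focus / compression
    transforms.ToTensor(),                                 # PIL image -> [0, 1] tensor
    AddGaussianNoise(std=0.05),                            # video grain
])
```

The rationale is exactly the one the engineers gave: a model that only ever sees pristine input tends to break on real-world footage.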

Staff feared being digitally impersonated

One question that repeatedly surfaced internally: “Can my face say something I never said?” That worry was never fully resolved.

While xAI claimed it wouldn’t create replicas of employees, a model trained on their footage could still learn to express emotion the way they do.

And with deepfakes and avatar generation becoming increasingly common, even indirect likeness simulation felt risky. Some employees weren’t ready to gamble their likeness on promises alone.

The project followed other Grok scandals

Skippy didn’t exist in a vacuum. xAI had already faced heat earlier that month when a prompt update sent Grok into antisemitic rants. Another scandal followed when the avatars behaved in sexualized or violent ways.

These events undermined confidence in xAI’s ethical boundaries, leaving staff wondering: if this is how the public-facing AI acts, can we trust what happens behind the scenes with our data?

Skippy was one of many internal test efforts

Skippy wasn’t xAI’s only controversial project. Internal leaks show the company has trained Grok using zombie apocalypse scenarios, plumbing failures, and Mars colony roleplays.

In that context, Skippy was just one piece of a much larger puzzle: a deeply experimental, sometimes chaotic, development process driven by Musk’s ambitious push to “humanize” AI.

But it also highlighted how far xAI was willing to go, even using employees’ likenesses without clear long-term boundaries.

Musk wants Grok to be emotionally intelligent

Elon Musk has repeatedly stated he wants Grok to be more than smart; he wants it to be emotionally responsive. To him, a humanlike AI must be able to feel, or at least simulate feeling convincingly.

Skippy was a step toward that future. Whether it worked as planned is still unclear, but what’s certain is that xAI is actively exploring ways to blur the line between code and consciousness.

xAI is pushing into kids’ AI with Baby Grok

Amid all the controversy, xAI announced plans for Baby Grok, a version of the chatbot designed for children. Many are skeptical, pointing to the overly sexualized avatars and Grok’s recent scandals.

Can a company that struggled with “Safe Mode” now be trusted with kids’ data and content? Skeptics argue that Baby Grok could be a dangerous experiment, especially if built on the same emotional training foundation as Skippy and its avatars.

And if recent events are any clue, xAI may have bigger issues to fix first: after Grok publicly praised Hitler, the company apologized and pointed fingers at users.

Grok’s humanity is being built on real faces

Ultimately, Skippy is a glimpse into AI’s future and a warning. The dream of a humanlike AI is closer than ever, but it’s built using real people’s faces, emotions, and vulnerabilities.

Whether that future is empathetic or exploitative depends on how companies like xAI treat the humans behind the training data. One thing’s for sure: Grok’s face is more human than we thought, and that comes with serious responsibilities.

And now, with an API key leak tied to a government-linked DOGE staffer adding to the concerns, the human cost of this tech is even harder to ignore.

What do you think about xAI using its employees’ faces and voices to train Grok to act more human? Please share your thoughts and drop a comment.
