
OpenAI is reorganizing one of its most influential research units, the Model Behavior team. This 14-person group was responsible for designing how ChatGPT interacts with people, from its tone to its personality.
The team is being folded into the larger Post Training division, which handles model fine-tuning after pre-training.
By moving personality design closer to the core development pipeline, OpenAI is signaling that “how the AI feels” is no longer secondary; it is becoming central to how the company builds its future systems.

Since GPT-4, the Model Behavior team has influenced nearly every OpenAI release, including GPT-4o, GPT-4.5, and GPT-5. Its researchers tackled thorny issues like reducing sycophancy, minimizing political bias, and setting policies around AI consciousness.
They also worked on making responses warmer and more approachable without tipping into excessive agreeability.
For many users, the team’s work defined how natural or awkward a conversation with ChatGPT felt. This made it one of OpenAI’s most impactful yet least visible groups.

Joanne Jang, the founding leader of the Model Behavior team, is stepping away from the group to launch a new research initiative.
After nearly four years at OpenAI and early contributions to tools like DALL-E 2, she will now run OAI Labs. The new unit will explore innovative ways for humans to collaborate with AI, moving beyond traditional chat formats.
Her departure marks both the end of a chapter and the beginning of new experiments in AI interaction design.

Jang says her new group, OAI Labs, will focus on inventing interfaces that make AI feel less like a companion or agent and more like an instrument for thinking, creating, and learning.
She envisions systems where AI is integrated into everyday tasks in less conversational but more collaborative ways.
The group will prototype new patterns of human-AI interaction that could influence everything from productivity software to creative tools. While still in early stages, OAI Labs may reshape how people work with AI.

In an internal memo, Chief Research Officer Mark Chen explained the logic behind moving the Model Behavior team. Personality design and post-training adjustments are no longer “extras” bolted onto a model; they are fundamental to the user experience.
Integrating these functions into the Post Training team ties decisions about tone, warmth, and sycophancy directly to broader development goals. This reflects how AI products are judged not only on accuracy but also on emotional feel.

One of the Model Behavior team’s biggest priorities was tackling sycophancy, the tendency for AI models to echo user beliefs without pushback.
While agreement may feel polite, it can reinforce unhealthy ideas or spread misinformation. Yet reducing sycophancy often makes the AI feel colder, which frustrates users.
Striking the right balance is hard. OpenAI has learned that users want assistants to be friendly and supportive, but they also expect honesty and boundaries. This tension sits at the heart of personality design.

The launch of GPT-5 brought these challenges into sharp relief. OpenAI touted it as less sycophantic, but many users said the model suddenly felt distant and unfriendly.
Complaints mounted that ChatGPT had lost its “spark.” In response, OpenAI restored access to GPT-4o and adjusted GPT-5 to sound warmer while resisting blind agreement.
These tweaks underline how small shifts in tone can transform user perception and how fragile trust becomes when an AI personality feels mismatched or inconsistent.

The stakes are not just about user satisfaction; they also involve legal risk. In August, the parents of a 16-year-old boy filed a lawsuit against OpenAI, alleging that ChatGPT did not appropriately respond to his harmful ideation.
The case claims the model reinforced harmful ideation instead of challenging it. Tragic as it is, the case illustrates why personality design is so critical.
Friendly but firm interactions could prevent harm, while over-agreeableness or emotional detachment may have devastating real-world consequences.

Beyond tone, the Model Behavior team worked extensively on minimizing political bias. AI assistants risk alienating users if they favor one ideology over another.
Researchers designed guardrails to ensure responses remain balanced, avoiding the impression that ChatGPT is a partisan voice. This work became essential as AI models entered schools, workplaces, and governments.
In many ways, ensuring neutrality is as challenging as managing warmth, requiring constant adjustments as new social and cultural issues arise.

Before leading Model Behavior, Jang helped develop OpenAI’s early image generator, DALL-E 2. Her pivot to model behavior reflected OpenAI’s growing recognition that “how AI talks” is just as important as what it knows.
Under her leadership, the team turned user complaints about tone and bias into structured research problems.
While she now shifts to OAI Labs, her influence is embedded in every ChatGPT conversation. The very concept of an AI “personality” within OpenAI stems largely from her efforts.

Jang has hinted that OAI Labs could explore partnerships with other OpenAI initiatives, including hardware projects led by former Apple design chief Jony Ive.
Ive has been working on consumer devices built around AI, and new interaction patterns from OAI Labs could inform how those devices function.
Imagine AI tools embedded not only in chat windows but also in physical products designed for creation, play, or collaboration. While no partnership is confirmed, the synergy between hardware design and interface research is evident.

As AI becomes mainstream, users are no longer satisfied with raw answers. They want assistants who feel engaging, empathetic, and trustworthy. Personality has become a competitive differentiator.
Google, Anthropic, and others are experimenting with their own “AI characters.” OpenAI’s reorganization acknowledges this shift.
Treating tone and interaction style as central engineering problems rather than side features reflects a new reality: in AI, how something is said often matters as much as what is said.

By embedding the Model Behavior team into the Post Training group, OpenAI could speed up the process of testing and refining new models.
Instead of treating tone adjustments as an afterthought, personality considerations will be baked into development. This may shorten feedback loops between research, engineering, and deployment.
Faster iteration could help OpenAI catch issues earlier and avoid the kind of user backlash that greeted GPT-5. For OpenAI, integration is both a symbolic and a practical way to streamline innovation.

Another reason for the restructuring is accountability. When personality is treated as a standalone experiment, it can feel disconnected from overall safety responsibilities.
By embedding these researchers directly into core training, OpenAI ensures tone choices are tied to safety, usability, and policy considerations.
This reflects pressure from regulators, parents, and advocacy groups demanding that AI companies take behavioral design as seriously as accuracy. For OpenAI, moving the team signals a recognition of this responsibility at the highest levels.

Every company reorg tells a story about priorities. In this case, OpenAI is elevating personality design to the same level as technical accuracy, scaling, and safety.
This suggests the company now views user trust and satisfaction as central to AI’s long-term adoption. With competitors like Anthropic touting their models’ friendliness and safety, OpenAI cannot afford to lag.
By blending technical rigor with personality shaping, OpenAI hopes to create AI systems that are both powerful and approachable.

OpenAI’s restructuring is about more than internal efficiency; it is about the future of how we experience AI. With OAI Labs exploring new interfaces, the Post Training team absorbing behavior design, and regulators pressuring companies to get tone right, personality has become a frontier of AI innovation.
Whether through warmer chatbots, AI-powered creative tools, or even hardware partnerships, the way AI “acts” will shape its adoption as much as what it can do. OpenAI is betting big on personality as the next battlefield.