
Using LLMs To Improve Workplace Social Skills

Date: April 20, 2026
Topics: Education, Skills; Generative AI; Healthcare

[Image: A woman takes notes while working on a tablet]

Practicing specific social skills with AI chatbots helps users build confidence and competence.

Strong social skills are essential for counselors, teachers, psychotherapists, and caregivers, and an asset for anyone in the workplace navigating conversations with bosses, direct reports, co-workers, or clients.

Despite the importance of social skills, most of us aren’t trained in actively listening to what people are saying, showing empathy, communicating effectively, or resolving conflicts peacefully.

Even when people are taught these skills, there’s rarely enough opportunity to practice and receive helpful feedback, says Diyi Yang, assistant professor of computer science at Stanford.

To fill the gap, Yang and her team are leveraging the role-playing abilities of large language models (LLMs) in two roles: as a practice partner with a persona tailored to a specific social context, and as an expert mentor that gives feedback on how users might improve their skills.

With seed grant support from the Stanford Institute for Human-Centered AI, Yang’s team has developed AI Partner/AI Mentor (AP/AM) for practicing conflict resolution skills (a system called Rehearsal), basic peer-counseling skills, and, most recently, novice therapy skills (a system called CARE). Similar practice systems could be developed to help people become more skilled teachers, caregivers, human-resource managers, or health-care workers, she says.

The team’s work shows that practicing with an LLM partner definitely matters. “It helps people build confidence in their abilities,” says Ryan Louie, a postdoc in computer science at Stanford and first author on the CARE project. “And feedback matters for building competence in specific skills.”

Using LLMs to help humans be better humans is a great niche to be in, Louie says. “These AIs have enough capability to be helpful while functioning safely in this arena of helping the helpers.”

Creating AI Practice Partners

In creating CARE, Yang’s team worked with experienced therapists to design 25 different practice-partner personas, each confronting a specific problem: a lonely 35-year-old male who is estranged from his family and concerned about the upcoming holidays, for example, or a teenager whose family dynamics favor a sibling, causing depression and an inability to enjoy life.

“The personas aren’t specific people,” Yang says, “but they present realistic scenarios that a counselor would potentially come across in the real world.”

But designing a helpful practice partner requires more than a simple persona prompt. “Out of the box, LLMs don’t know how to behave in a way that will allow a person to learn specific social skills,” Louie says. They don’t act like a person receiving therapy, may hallucinate bizarre facts, and don’t instinctively respond in ways that offer opportunities to learn. For example, a user won’t learn how to resolve conflicts if the practice partner is too cooperative and sycophantic (common traits of LLMs). And a novice therapist won’t have an opportunity to practice open-ended questions if partners immediately disclose all their concerns.

To address these problems, the researchers on Yang’s team worked with domain experts from relevant fields, such as conflict resolution or psychotherapy, to co-design constitutions (sets of rules and criteria) to prompt the personas to behave appropriately.

“We had to find the ‘Goldilocks’ zone,” Louie says. For example, the researchers had to design rules that reflect how mental health patients actually behave, such as “show initial skepticism about seeking help,” “don’t disclose too much at the start,” or “be resistant or hesitant about accepting suggested solutions.” And the constitutions for different personas differed as well, ensuring users had an opportunity to practice a variety of skills in a variety of contexts.

Even with all these constraints, the model still needs to check itself before giving an output, Louie says. “The system will check each of its responses against all of the rules in the constitution and if one of them is violated, it will produce a different response.”
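In rough terms, that self-check can be sketched as a loop: draft a reply in persona, ask the model whether the draft obeys every rule in the constitution, and regenerate if it does not. The sketch below is illustrative only; the persona text, the rule wording, and the client object are stand-ins for whatever LLM interface the team actually uses, not their published code.

```python
# Illustrative sketch of a constitution-checked persona response loop.
# The persona prompt, rules, and `client.chat` interface are hypothetical
# placeholders; the Rehearsal/CARE systems may structure this differently.

PERSONA_PROMPT = (
    "You are role-playing a 35-year-old man who feels lonely, is estranged "
    "from his family, and is anxious about the upcoming holidays. "
    "You are speaking with a novice counselor."
)

# A "constitution": behavioral rules co-designed with domain experts.
CONSTITUTION = [
    "Show initial skepticism about seeking help.",
    "Do not disclose too much at the start of the conversation.",
    "Be resistant or hesitant about accepting suggested solutions.",
]

def generate_reply(client, history, user_msg, max_retries=3):
    """Draft a persona reply, then self-check it against every rule.

    If the check fails, the model is asked to regenerate. `client.chat`
    stands in for a generic chat-completion call that returns a string.
    """
    messages = [{"role": "system", "content": PERSONA_PROMPT}] + history
    messages.append({"role": "user", "content": user_msg})

    draft = client.chat(messages)
    for _ in range(max_retries):
        # Self-critique pass: does the draft satisfy every constitutional rule?
        critique_prompt = (
            "Reply only 'pass' or 'fail'. Does the following response obey "
            f"all of these rules?\nRules: {CONSTITUTION}\nResponse: {draft}"
        )
        verdict = client.chat([{"role": "user", "content": critique_prompt}])
        if verdict.strip().lower().startswith("pass"):
            return draft

        # A rule was violated: ask the model to rewrite and check again.
        messages.append({"role": "assistant", "content": draft})
        messages.append({
            "role": "user",
            "content": "That response broke one of your behavioral rules. "
                       "Rewrite it so it follows every rule.",
        })
        draft = client.chat(messages)

    return draft  # fall back to the last draft if retries are exhausted
```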

Developing LLM Mentors

Practice partners are only one half of the AP/AM framework.

To develop an AI mentor that gives appropriate feedback, the team started with an evidence-based framework for communication that combined helping skills and motivational interviewing. Then therapist supervisors from the Stanford School of Medicine used that framework to review and annotate emotional support conversation transcripts. The research team fine-tuned the model based on those supervisors’ critiques. And to minimize the risk of low-quality feedback, the team took the extra step of having the model improve itself by generating several alternative responses, selecting the most effective ones, and further optimizing itself with the highest-scoring options, Louie says.
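The “generate several alternatives, keep the best” step Louie describes resembles best-of-n sampling followed by preference-style optimization. The sketch below illustrates that general idea only; mentor_model, reward_model, and the chosen/rejected pairs are assumptions made for illustration, not the team’s published pipeline.

```python
# Illustrative sketch of selecting the best of several candidate feedback
# messages and collecting preference pairs for further optimization.
# mentor_model and reward_model are hypothetical stand-ins: a fine-tuned
# mentor and a quality scorer trained on expert annotations.

def best_of_n_feedback(mentor_model, reward_model, transcript, n=4):
    """Sample n candidate feedback messages and return the highest-scoring one."""
    candidates = [mentor_model.generate(transcript) for _ in range(n)]
    scored = [(reward_model.score(transcript, c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0][1]

def build_preference_pairs(mentor_model, reward_model, transcripts, n=4):
    """Collect (chosen, rejected) candidate pairs so the mentor can be further
    optimized on its own best outputs, e.g., with a preference-tuning method
    (an assumption here, not a detail from the article)."""
    pairs = []
    for t in transcripts:
        candidates = [mentor_model.generate(t) for _ in range(n)]
        scored = sorted(
            ((reward_model.score(t, c), c) for c in candidates),
            key=lambda pair: pair[0],
        )
        pairs.append({
            "prompt": t,
            "chosen": scored[-1][1],   # highest-scoring candidate
            "rejected": scored[0][1],  # lowest-scoring candidate
        })
    return pairs
```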

The result is an AI mentor that gives feedback about a user’s strengths and areas for improvement and also suggests alternative ways to respond that mirror how experienced counselors would provide suggestions to novices.

A 90-person randomized controlled trial of CARE compared groups that did LLM practice sessions with and without feedback. The results showed that practice alone builds confidence: even the control group, which received no feedback, grew more confident. But feedback is needed to improve skills, Louie says. For example, the practice-and-feedback group was more client-centered and showed increased empathy, while the practice-only group was more inclined to suggest solutions to clients’ problems than to help clients come up with their own.

“Feedback gives counselors examples of how they can show more empathy, which helps them understand ways to make a client feel more supported,” Louie notes.

Future Work

Going forward, the big challenge will be finding a way to quickly and responsibly create practice simulations for new use cases. Right now, it takes a lot of careful work and expert involvement to create partners and mentors that can teach the desired skills, Louie says.

Currently, the team is adapting the CARE system for use by community mental health centers that train their own counselors, as well as launching a collaboration in India that will require adapting the AI partners and mentors to a different language and cultural context.

Another area for future work is personalization. For that, these tools need to find the right zone of difficulty for each user, Louie says. “We don’t want to have users practice a skill they don’t even know about or give them feedback that is developmentally too challenging.”

It’s also important that AI partners and mentors not replace opportunities to practice with peers or receive feedback from human supervisors. “We need to find ways to complement existing training in ways that make sense,” Louie notes.

That said, these systems could help many important but under-resourced organizations—from nonprofits and peer-counseling programs to community mediators and therapists—where training is limited. “I’d like to address training bottlenecks in community-based contexts where the volunteers or non-specialist providers don't get enough training or could use continued training to make them more effective,” Louie says. “If we can increase the capability of humans to support each other, then we’re all better off.”

Contributor(s): Katharine Miller

Related News

Collaborative Coding, Better Scaling, Health Tracking: HAI Awards $2.17M to Innovative Research
Nikki Goth Itoi | Apr 29, 2026 | Announcement
Seed grants will fund 29 research teams pursuing novel research ideas across disciplines.

An AI Health Coach Could Change Your Mindset
Katharine Miller | Apr 23, 2026 | News
Bloom, a health coaching app created by Stanford researchers, helps people tap into their own motivations.

AI’s ‘Delusional Spirals’ (and What to Do About Them)
Andrew Myers | Apr 20, 2026 | News
In a world where chatbots can stand in for friends, counselors, and even lovers, the mental health risks are a growing concern.