When AI Writes Your Email

Date: May 06, 2020
Topics: Arts, Humanities
Image credit: Linda A. Cicero / Stanford News Service

Artificial intelligence is changing the way we communicate with each other, leading to questions of trust and bias.

Like Cyrano de Bergerac writing love letters on behalf of a friend, artificial intelligence (AI) tools are polishing our writing styles to make them more convincing, authoritative and trustworthy.

Spell check and autocorrect came first, fixing a few mistakes in our texts and emails. But now Gmail suggests entire sentences for us to use in emails or texts, while other AI tools polish our online profiles for Airbnb postings, job applications and dating websites. Down the line, AI systems might even send messages on our behalf with only minimal involvement on our part.

As the level of AI involvement in human-to-human communication grows, so too does the need for research into its impacts.

“There are interesting implications when AI starts playing a role in the most fundamental human business, which is communication,” says Jeff Hancock, the Harry and Norman Chandler Professor of Communication at Stanford University and founding director of the Stanford Social Media Lab.

In “AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations,” published in the Journal of Computer-Mediated Communication in January, Hancock and two Cornell colleagues reflect on what happens when AI tools come between people and act on their behalf, from how wording suggestions could alter our use of language and bake in bias, to the impact these communications could have on relationships and trust. 

Language Change at Scale

For several years, Gmail users have had access to an AI tool called “Smart Reply,” which suggests three short replies to any email. For example, in response to an email proposing a meeting time, Gmail might suggest replies such as “Sounds good!”, “See you then!” or “Tuesday works for me!”

Recent research out of Cornell found that the language of Gmail Smart Reply tends to be overly positive rather than either neutral or negative (a result that was recently replicated by Stanford researchers). It’s even possible that the overly positive phrasing of Gmail Smart Reply primes recipients to respond in kind, with something like “Cool, it’s a plan!” even though they don’t know that AI is involved. Since tens of millions of messages are sent by Gmail users every day, this tendency could lead to language change at an unforeseen scale: Our language might evolve toward Google’s optimistic tone.  
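To make that kind of measurement concrete, here is a minimal, self-contained Python sketch. It is an illustration only, not the Cornell or Stanford researchers' actual method: it scores each suggested reply against a tiny hand-built sentiment lexicon and compares the average with deliberately neutral phrasings. The lexicon, the replies, and the scoring rule are all invented for this example.

# Toy positivity check for suggested replies (illustrative only; not the
# methodology of the Cornell or Stanford studies).
POSITIVE = {"good", "great", "sounds", "works", "cool", "happy", "thanks"}
NEGATIVE = {"unfortunately", "busy", "sorry", "decline", "no"}

def sentiment_score(reply):
    # Positive-lexicon hits minus negative-lexicon hits, with each
    # exclamation mark counted as one extra positive signal.
    words = reply.lower().replace("!", " ! ").split()
    base = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return base + words.count("!")

smart_replies = ["Sounds good!", "See you then!", "Tuesday works for me!"]
neutral_replies = ["I received your message.", "Tuesday is possible.", "Noted."]

mean = lambda xs: sum(xs) / len(xs)
print("suggested replies:", mean([sentiment_score(r) for r in smart_replies]))   # 2.0
print("neutral baseline:", mean([sentiment_score(r) for r in neutral_replies]))  # 0.0

Run as-is, the canned suggestions average well above the neutral baseline, mirroring in miniature the positivity skew the Cornell study reports at Gmail scale.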

“The simple bias of being positive has implications at Google scale,” Hancock says. “Maybe it’s no big deal and it’s what people would have said anyway, but we don’t know.” 

During the current shutdown due to the Covid-19 pandemic, Hancock wonders whether the use of Gmail Smart Reply tools will decline because their positivity seems less appropriate. “It’s not as common to say ‘See you later,’” he notes. “Now the common signoff is more likely ‘Stay safe’ or ‘Be well.’ Will AI pick up on that?” Yet another item on the research agenda. 

Built-In Bias

Natural language AI tools are typically built on a dataset consisting of a bucket of words and the various ways they have been assembled into sentences in the past. And these tools are designed to optimize for a specific goal, such as trustworthiness or authoritativeness. Both of these aspects of AI can build bias into an AI-mediated communication system. First, the word bucket might not include a diversity of communication styles. Second, the optimization step might promote the communication style used by the dominant group in the culture. “If AI is optimizing for sounding authoritative, then everyone will be made to sound like older white males,” Hancock says. 
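As a toy demonstration of that second point, consider the sketch below. It is a deliberate simplification, not a model from the paper: the “authoritative style” score is fit to a corpus drawn from a single group, so reranking candidate messages by that score reliably filters out other voices. The corpus, candidates, and scoring function are invented for illustration.

from collections import Counter

# Hypothetical "training corpus": formal phrasing from one dominant style.
corpus = "per my last correspondence i am confident the results are definitive".split()
style_counts = Counter(corpus)

def authority_score(candidate):
    # Reward vocabulary overlap with the dominant style's corpus; words
    # outside that corpus contribute nothing.
    return sum(style_counts[w] for w in candidate.lower().split())

candidates = [
    "I am confident the results are definitive",     # matches the dominant style
    "honestly I think this turned out pretty well",  # equally valid, different voice
]

# Reranking by the learned style score always promotes the dominant style.
print(max(candidates, key=authority_score))  # -> "I am confident the results are definitive"

Here the reranker picks the dominant-style candidate every time, even though the alternative is a perfectly good reply; a production system optimizing for “authoritativeness” at scale would do the same thing, only more subtly.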

For example, Hancock says, one can imagine that a young black woman might overcome the racial and gender biases she faces by writing a job application using an AI tool that optimizes for the authoritative communication style of an older white male. But this benefit might well come at some cost to her self-expression while also reinforcing the privileged status of the dominant group’s language usage.

Trust and Transparency

One recent study by Hancock and his colleagues hints at the trust issues raised by AI-mediated communication. The researchers asked whether believing that online Airbnb host profiles were written by AI affects how readers view them. Initially, readers trusted host profiles equally regardless of whether they were told the profiles were written by humans or with AI assistance. But when readers were told that some profiles were written by humans and others by AI, their trust fell for any profile that seemed formulaic or odd, and therefore more likely to have been written by AI.

“If people are uncertain and can’t tell if something is AI or human, then they are more suspicious,” Hancock says. This suggests that transparency (or lack of transparency) about the role that AI plays in our interpersonal communications may affect our relationships with one another.

Agency and Responsibility

Just as humans delegate agency to lawyers, accountants and business associates, humans are now delegating agency to AI communication assistants. But these agents’ responsibility for errors is unclear and their interests aren’t always fully aligned. “The agent is supposed to work on my behalf, but if it belongs to Google, does it have a separate interest adverse to my interest?” Hancock asks. 

In Cyrano de Bergerac, the articulate Cyrano wants to win Roxane for himself while writing love letters to her on behalf of a man who can barely put a sentence together. 

As AI systems become more sophisticated, will they step in as a personal Cyrano? And if so, will the responsible author be the human or the AI? What happens if the human behind an articulate message proves to be a fool?

Contributor: Katharine Miller
Related
  • Adina Sterling: How will artificial intelligence change hiring?
    Bill Snyder | Jan 06 | News
    New technologies help bring increased efficiency to the hiring process, but also pose significant challenges.
Related News

Exploring the Ethics of AI through Narrative
Dylan Walsh | Apr 03, 2025 | News
A workshop at Stanford convened filmmakers and researchers to think about the implications of artificial intelligence.

Ge Wang: GenAI Art Is the Least Imaginative Use of AI Imaginable
Ge Wang | Jan 24, 2025 | News
The prevailing public mindset that AI is only a labor-saving tool betrays a lack of understanding of why people create and a lack of imagination of this technology's potential.

AI Brings New Potential to the Art of Theater
Beth Jensen | Jan 09, 2025 | News
Stanford's Michael Rau combines human creativity and artificial intelligence to add new dimensions to storytelling and stagecraft.