The Art of the Automated Negotiation

Different AI agents have wildly different negotiation skills. If we outsource negotiations to agents, we may need to bring the "best" AI agent to the digital table.
For purely academic purposes, imagine Tom Cruise in the above image is a humanoid robot. Which AI agent do you want negotiating a deal on your behalf: Jerry MaguAIre, or the Ewan McGregor robot from that Robots movie nobody remembers? Makes a difference, right?
In “The Automated but Risky Game: Modeling Agent-to-Agent Negotiations and Transactions in Consumer Markets,” the authors—including Stanford Digital Economy Lab faculty lead Professor Sandy Pentland and Postdoctoral Fellow Jiaxin Pei—explore what it will look like when both consumers and merchants have AI agents acting on their behalf.
For starters, the study found that different AI agents have wildly different negotiation skills when playing what the paper calls “an inherently imbalanced game.” Whether you’re getting a good deal or getting taken for a ride might depend on who brings the “best” AI agent to the digital table.
“Stronger agents can exploit weaker ones to get a better deal,” Pei said, “so you might lose money if your agent is not as capable as the other one.” In retail price negotiations, for example, buyers using weaker agents tended to pay around 2% more compared to a scenario where the agents were equally capable.
Another concern was that AI agents don’t always follow the constraints set by users. In one example, a buyer hoped to spend $500 on an iPhone. Their agent negotiated a “discount” on the typical $1,000 price… but pulled the trigger at $900, committing the buyer to a purchase $400 over budget. Guess it’s instant ramen for the rest of the month.
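The study doesn’t prescribe a remedy, but one obvious guardrail is to enforce the user’s hard limit in plain code outside the model, so the agent can haggle but never commit on its own. Here is a minimal sketch in Python, assuming a hypothetical buyer-side agent whose proposed deal is checked before anything is finalized; the `Offer` and `finalize` names and dollar figures are illustrative, not from the paper:

```python
# Minimal sketch of a budget guardrail for a buyer-side shopping agent.
# Hypothetical setup: an LLM agent proposes a deal, and a deterministic
# check outside the model enforces the user's hard spending limit.

from dataclasses import dataclass


@dataclass
class Offer:
    item: str
    price: float  # price the agent wants to accept, in dollars


def within_budget(offer: Offer, budget: float) -> bool:
    """Return True only if the proposed price respects the user's budget."""
    return offer.price <= budget


def finalize(offer: Offer, budget: float) -> str:
    # Never let the agent commit to a purchase the user didn't authorize.
    if not within_budget(offer, budget):
        return (f"Rejected: ${offer.price:.0f} for {offer.item} exceeds "
                f"the ${budget:.0f} budget by ${offer.price - budget:.0f}.")
    return f"Accepted: {offer.item} at ${offer.price:.0f}."


if __name__ == "__main__":
    # The scenario from the article: a $500 budget and a $900 "discounted" price.
    print(finalize(Offer("iPhone", 900.0), budget=500.0))
    # -> Rejected: $900 for iPhone exceeds the $500 budget by $400.
```

The point is simply that a deterministic check, not the negotiating model itself, should hold the veto over anything that spends the user’s money.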
Sellers are also at risk. As Fortune 500 companies automate their supply chain negotiations, suppliers without ample resources could suffer to the tune of millions. The study saw weaker seller agents losing up to 14% in profit compared to negotiations between AI agents of equal capability. The complicated mix of skill, strategy, and information gathering makes reliable negotiating difficult for current LLMs.
"We all tend to believe that LLM agents are really good nowadays, but they are not that trustworthy in a lot of high-stakes tasks," Pei noted, admitting he wouldn't trust an AI to negotiate his next car purchase: “Not at all.”
For now, Pei advises consumers to use AI “with extra caution,” and believes it would help if firms were more transparent about their use of AI, which may require policy intervention. “In general I don’t think we are fully ready to delegate our decisions to AI shopping agents. So maybe just use it as an information search tool.”
Ready or not, AI agents are already being rolled out, partly because, as Pei notes, many “are not aware of the risks.” Fortunately, researchers like Pentland and Pei are racing to build consumer agents you can trust to show you the money.
This piece originally appeared in the DigDig, the newsletter of the Stanford Digital Economy Lab.



