Why Corporate AI Projects Succeed or Fail

Date: February 18, 2025
Topics: Economy, Markets; Machine Learning

Stanford researchers uncover the key factors behind successful AI development in the workplace.

It’s a mad competition for AI talent. Job postings for machine learning and AI engineers rose 70-80% in early 2024 compared with 2023. Companies are offering new recruits substantial compensation and big budgets to poke around their internal operations, interact with employees across the business, find problems or inefficiencies, and then build AI-based solutions to address them.

“But in-house developments don’t seem to be working well: Even as companies invest a lot of money, a lot of projects are failing or not delivering their promised value,” says Arvind Karunakaran, an assistant professor of engineering at Stanford and a faculty affiliate at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). “Something is going on in these very early stages of interaction between developers and other employees across the business that’s leading to these shortcomings.”

Between October 2019 and December 2023, Karunakaran and two colleagues, HAI postdoctoral fellow Luca Vendraminelli and Stanford doctoral student Devesh Narayanan, embedded themselves with software developers at a multinational fashion company and followed the ups and downs of various AI-based projects. From these four years of observation, they distilled three key variables, described in a working paper, that heavily shape how developers interact with other employees across the company and, in turn, whether they craft successful products.

Determining clarity, centrality, and homogeneity

Importantly, AI developers don’t have formal authority over employees in other divisions of a company. If they want to build an AI tool related to procurement, for instance, they need the cooperation of people who work there in order to learn about the job and what challenges AI could solve.

The researchers watched this process of expertise elicitation unfold many times and ultimately focused on two projects that epitomized how and why projects either succeed or fail. Project 1, which succeeded, was an AI tool to improve how products are distributed across the supply chain. Project 2, which failed, was an AI tool to optimize “retail productivity,” defined as the ratio of sales to payroll costs at retail stores.

Notably, the team members were the same in both cases, and project 1 unfolded before project 2, so the findings were not the result of differences between teams or of learning over time.

These were the three defining features differentiating project 1 from project 2:

  1. Jurisdictional clarity: There was a well-defined group of roughly ten allocation specialists who all reported to the same boss and had clear jurisdiction over decisions in project 1. The software developers knew whom to speak with and could readily reach them. In contrast, project 2 involved nearly 200 store managers who reported to different finance and district managers. There was no obvious group from which to elicit expert advice in this case.

  2. Task centrality: In project 1, the efficient allocation of products to retail stores was viewed as a central responsibility among the allocation specialists, so they were willing to invest time in helping develop an AI tool; it would clearly help them in their day-to-day work. For the store managers in project 2, the developers’ focus on retail productivity was often considered peripheral to effectively running a store, so managers had little stake in the tool’s development.

  3. Task enactment: In project 1, the task of allocation was essentially the same for everyone. In project 2, however, each manager was in charge of a unique property with distinct consumer and employee demands. This made the development of a one-size-fits-all AI tool nearly impossible.

The interaction of these three variables determined how well developers were able to gather information and, ultimately, design a successful tool.

Increasing the chance of success

The implications for managers are threefold, Karunakaran says. “First, they need to mandate that AI is important. If they hire a bunch of developers and task them with talking to people but don’t empower them with anything other than money, then they’re setting the developers up for failure,” he says. Managers need to set clear guidelines, even on minor concerns: Domain experts must respond to emails from AI developers within a given window, for instance. “Developers should go into the field with the right sponsorship and ensured access to experts.”

Second, and relatedly, companies may want to create a new go-between role: a person who facilitates the flow of information between domain experts throughout the company and the developers building new AI tools.

Finally, if particular projects are spinning their wheels, managers should step in to help refocus or narrow the scope. Project 2, for example, involved such a complex web of individuals and stores that one AI tool could never meet every need. A more successful approach might have been to step back and target, say, the 10 or 20 worst-performing stores.

The researchers note that developers, too, can be proactive in how they approach new projects. As they begin to meet with domain experts and gather information, they ought to suss out the nature of the project at hand. Is the jurisdiction clear and is the task central to those with whom they’re talking? Is the work performed similarly each time and by everyone involved? If so, the information they need should be relatively easy to acquire; if not, perhaps a recalibration is in order.

“The second instance would suggest you’re in more of a project-2 landscape and things might derail quite quickly,” Vendraminelli says. “Beyond coding, sometimes developers need to work a bit like sociologists, to get an understanding of what they might need to build effective AI tools.”

Contributor(s): Dylan Walsh
