Translating Centralized AI Principles Into Localized Practice

Date: January 13, 2026
Topics: Ethics, Equity, Inclusion | Regulation, Policy, Governance
[Image: Pedestrians walk by a Louis Vuitton store]

In collaboration with the luxury goods multinational LVMH, scholars develop a framework that lays out how large companies can flexibly deploy principles for the responsible use of AI across business units worldwide.

The luxury goods giant LVMH operates 75 subsidiaries, called Maisons, across roughly a dozen sectors in 190 countries. The company employs 200,000 people. Not long ago, it issued a charter for the responsible use of AI (RAI) within the company, underscoring high-level principles including explainability, fairness, and privacy. But what happens when these founding principles get pushed out to such an expansive, fragmented network?

“This is a central question: How are these Maisons interpreting and executing the principles and to what extent is there consistency across such a diverse organization?” said Kiana Jafari, a postdoctoral scholar at the Stanford Intelligent Systems Laboratory. “Are there frameworks that don’t only respond to the challenge of doing this consistently but that might actually help companies become more productive?”

Jafari and a handful of other Stanford researchers collaborated with LVMH, an industry affiliate of the Stanford Institute for Human-Centered AI, to assess the process by which principles are disseminated and converted into practice in various Maisons. The work, which took place over one year, culminated in the development of the Adaptive RAI Governance (ARGO) framework, a structured but flexible tool designed to balance centralized coordination with local autonomy. The work was presented recently at the ACM Conference on Fairness, Accountability, and Transparency.

The project began with more than 50 in-person and written interviews, along with an extensive document review, to understand how RAI policies in specific Maisons aligned with the overall charter. This preliminary work illuminated significant variation in how business units interpreted and applied RAI principles and tools, in part because norms and regulations differ across sectors and locales around the world.

The ARGO framework emerged in response to this variability, designed to avoid the bottlenecks of full centralization while maintaining the consistency that decentralized approaches lack. It defines three interdependent layers of operation in a company, which, working together, coordinate efforts around RAI:

1. Shared Foundation

This layer defines a minimum set of expectations across the organization, while acknowledging that local implementation will vary. It includes:

  • A shared charter or document outlining RAI principles;

  • Standard templates for documenting models and use cases;

  • A checklist or triage tool that identifies high-risk applications (a minimal sketch follows this list);

  • Baseline legal and regulatory guidance; though regulations vary across jurisdictions, some standards can be set at the top level, such as adopting the European Union's General Data Protection Regulation (GDPR) as the minimum privacy standard globally, even in jurisdictions with less stringent requirements;

  • A clear definition of the roles and responsibilities required for RAI oversight at both group and business unit levels.
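
To make the triage item above concrete, here is a minimal sketch of what such a checklist tool might look like in code. It is illustrative only: the questions, risk tiers, and the UseCase fields are hypothetical, not taken from LVMH's charter or tooling.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Hypothetical description of an AI use case submitted by a business unit."""
    name: str
    affects_customers: bool        # does it make or shape customer-facing decisions?
    processes_personal_data: bool  # does it handle personal data covered by GDPR-style rules?
    automates_decisions: bool      # does it act without a human in the loop?
    regulated_domain: bool         # e.g., hiring, credit, or health

def triage(use_case: UseCase) -> str:
    """Classify a use case as 'high', 'medium', or 'low' risk.

    Illustrative rules only; a real tool would encode the organization's
    own charter and the regulations that apply in each jurisdiction.
    """
    if use_case.regulated_domain or (use_case.automates_decisions and use_case.affects_customers):
        return "high"
    if use_case.processes_personal_data or use_case.affects_customers:
        return "medium"
    return "low"

if __name__ == "__main__":
    demo = UseCase(
        name="personalized product recommendations",
        affects_customers=True,
        processes_personal_data=True,
        automates_decisions=False,
        regulated_domain=False,
    )
    print(demo.name, "->", triage(demo))  # -> medium
```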

2. Advisory and Tooling Layer

This layer consists of a centralized group that interacts with the subsidiaries, establishing minimum requirements and feedback processes by providing:

  • RAI toolkits for applications like explainability dashboards or fairness metrics libraries (one such metric is sketched after this list);

  • Training programs and playbooks for business units;

  • Lightweight reporting tools, such as standardized incident logs and monthly AI usage dashboards;

  • Centralization of common AI assets, including best RAI practices that can serve as models for implementation;

  • Feedback channels for business units to report whether assets are working particularly well or proving particularly problematic.
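
As one illustration of what a shared fairness metrics library might expose, the sketch below computes a demographic parity difference: the gap in positive-outcome rates between groups. The function name and interface are hypothetical, not a specific library's API.

```python
from collections import defaultdict

def demographic_parity_difference(groups: list[str], predictions: list[int]) -> float:
    """Return the largest gap in positive-prediction rates across groups.

    `groups` holds one group label per example (e.g., a region or customer
    segment); `predictions` holds the model's binary decision (1 = positive
    outcome). A value near 0 suggests similar treatment across groups;
    larger values warrant review.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Group "A" receives the positive outcome 3 times out of 4, group "B" once
# out of 4, so the reported gap is 0.5.
print(demographic_parity_difference(
    ["A", "A", "A", "A", "B", "B", "B", "B"],
    [1, 1, 1, 0, 0, 1, 0, 0],
))
```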

3. Local Implementation and Oversight

At the level of individual teams and business units, this layer explicitly assigns local responsibility for the following (a minimal sketch of the monitoring and reporting tasks appears after the list):

  • Applying tools and processes to their context;

  • Monitoring model behavior and risks;

  • Conducting internal reviews or self-assessments;

  • Reporting incidents or deviations.
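
To illustrate the monitoring and reporting responsibilities above, here is a minimal sketch, assuming a single monitored metric: it compares an observed value against a baseline and emits a standardized incident record, of the kind the advisory layer might collect, when the deviation exceeds a threshold. The field names and threshold are hypothetical.

```python
import json
from datetime import datetime, timezone

DRIFT_THRESHOLD = 0.1  # illustrative tolerance on the monitored metric

def check_and_report(unit: str, model: str, metric_name: str,
                     baseline: float, observed: float) -> dict | None:
    """Compare an observed metric to its baseline and return an incident
    record (ready to send up the group-level feedback channel) when the
    deviation exceeds the threshold; otherwise return None."""
    deviation = abs(observed - baseline)
    if deviation <= DRIFT_THRESHOLD:
        return None
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "business_unit": unit,
        "model": model,
        "metric": metric_name,
        "baseline": baseline,
        "observed": observed,
        "deviation": round(deviation, 4),
        "status": "escalated",
    }

# Hypothetical unit and model names; the observed gap has drifted past the
# threshold, so a standardized incident entry is produced.
incident = check_and_report("example-maison", "recommender-v2",
                            "demographic_parity_difference", 0.05, 0.22)
if incident:
    print(json.dumps(incident, indent=2))
```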

Jafari noted that implementing these recommendations requires a range of investments. If a company doesn’t yet have a dedicated RAI officer, then one will likely need to be hired. This can be costly.

“But there are also plenty of things that can be achieved quickly and without much expense,” she said. “For instance, companies can set up regular meetings or forums where responsible AI officers and security and data governance folks can get together and exchange ideas. There is plenty of low-hanging fruit to improve these processes.”

Along with a description of the three layers above, the researchers and LVMH provided a handful of recommendations for implementing the ARGO framework, including the following:

  • Define a core set of minimum practices that all units must follow, regardless of risk level. Examples of such practices include mandatory human-in-the-loop review for customer-facing AI decisions, documented model purpose statements for all deployments, and quarterly bias audits for recommendation systems;

  • Invest in shared, reusable tools that make RAI efforts practical and efficient to implement in different contexts. These tools must be flexible enough to let decentralized teams select components based on their use case, maturity, and risk profile; examples include pre-built fairness testing modules that can be adapted for different product categories;

  • Create lightweight feedback and escalation mechanisms for surfacing incidents, near misses, unexpected model behavior, or issues with group-level assets that arise at the business-unit level;

  • Prioritize visibility over control, ensuring the organization knows where AI is being used, how it is evaluated, and by whom, while avoiding tightly prescriptive oversight (a minimal registry sketch follows this list);

  • Encourage shared learning, such as cross-unit peer reviews or communities of practice, to prevent duplication of effort and accelerate ethical innovation.
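
The kind of visibility recommended above could be supported by something as simple as a group-wide registry of AI deployments. The sketch below is a minimal, hypothetical version: each entry records where a system is used, how it is evaluated, and who owns it, and the registry can be exported for group-level review. The schema and example entries are invented for illustration.

```python
import csv
import io

# Hypothetical schema: one row per deployed AI system across business units.
REGISTRY_FIELDS = ["business_unit", "system", "purpose", "risk_tier", "evaluation", "owner"]

registry = [
    {"business_unit": "maison-a", "system": "recommender-v2",
     "purpose": "product recommendations", "risk_tier": "medium",
     "evaluation": "quarterly bias audit", "owner": "local RAI lead"},
    {"business_unit": "maison-b", "system": "demand-forecast",
     "purpose": "inventory planning", "risk_tier": "low",
     "evaluation": "monthly accuracy review", "owner": "data science manager"},
]

def export_registry(entries: list[dict]) -> str:
    """Serialize the registry to CSV so the group-level team can see
    where AI is in use without prescribing how each unit runs it."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=REGISTRY_FIELDS)
    writer.writeheader()
    writer.writerows(entries)
    return buffer.getvalue()

print(export_registry(registry))
```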

“Our research shows that practical implementation often matters more than policy articulation,” said Mykel Kochenderfer, an associate professor at Stanford and one of the project’s principal investigators. “We hope this ARGO framework can contribute to closing the gap between RAI principles and practice.”

This research was supported by the HAI Industrial Affiliates program, of which LVMH is a member.

Contributor(s): Dylan Walsh

Related News

AI For Good: What Does It Mean Today?
Forbes
Jan 23, 2026
Media Mention

HAI Co-Director James Landay urges people to think about what "AI for good" means today. He argues, "we need to move beyond just thinking about the user. We’ve got to think about broader communities who are impacted by AI systems if we actually want them to be good.”


AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026
Media Mention

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 


Stanford’s Yejin Choi & Axios’ Ina Fried
Axios
Jan 19, 2026
Media Mention

Axios chief technology correspondent Ina Fried speaks to HAI Senior Fellow Yejin Choi at Axios House in Davos during the World Economic Forum.
