
Better AI Data Rights

We propose novel “in-situ” data access rights, instead of data portability, to accelerate competition, innovation, and data use by bringing AI and ML algorithms to data instead of the reverse.

By Marshall Van Alstyne, Geoffrey Parker, Georgios Petropoulos and Bertin Martens

What is your policy proposal? Who defines data rights? How do data rights drive competition, growth, and innovation? US and EU legislation (e.g., the CCPA and GDPR) seeks to empower individuals and boost competition via data portability rights. This is only a partial step. By contrast, our proposal provides new and stronger data ownership rights, with specific principles to help users capture the value created from their data while increasing privacy, competition, and innovation. We introduce a new “in-situ” right that allows businesses and individuals to police and act on their own data where it resides. In particular, we propose that data owners can authorize third parties to access live data on their behalf. This brings AI algorithms to the data rather than the data to algorithms, while solving major problems with data portability: obsolescence, moral-hazard reporting, non-actionability, and security risks. It also facilitates competition among firms that use AI to create network effects, and it increases innovation.
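The in-situ mechanism described above can be illustrated with a minimal sketch: a data host keeps records in place, users grant or revoke third-party access, and authorized algorithms run where the data resides so that only results, never raw records, leave the host. All names here (`DataHost`, `grant`, `revoke`, `run`) are hypothetical illustrations, not part of any existing system or API.

```python
class DataHost:
    """Holds user data in place and brokers in-situ third-party access.
    Illustrative sketch only; names and interfaces are assumptions."""

    def __init__(self):
        self._data = {}       # user_id -> records; these never leave the host
        self._grants = set()  # (user_id, third_party) authorization pairs

    def store(self, user_id, records):
        self._data[user_id] = records

    def grant(self, user_id, third_party):
        # The user authorizes a third party to run algorithms on their data.
        self._grants.add((user_id, third_party))

    def revoke(self, user_id, third_party):
        # The user can terminate access at any time; because no copy was
        # exported, revocation is immediately effective.
        self._grants.discard((user_id, third_party))

    def run(self, user_id, third_party, algorithm):
        # The algorithm comes to the data; only its result leaves the host.
        if (user_id, third_party) not in self._grants:
            raise PermissionError("no in-situ grant for this third party")
        return algorithm(self._data[user_id])


host = DataHost()
host.store("alice", [3, 1, 4, 1, 5])
host.grant("alice", "acme-ml")

# The third party learns an aggregate, never the raw records.
avg = host.run("alice", "acme-ml", lambda recs: sum(recs) / len(recs))

host.revoke("alice", "acme-ml")  # further runs now raise PermissionError
```

Because the algorithm executes inside the host's infrastructure, the same broker is also the natural point at which to inspect algorithms for bias, as the proposal notes.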

What problem does your proposal address? Our proposal solves (1) the competition problem that gatekeeper control over data forecloses market entry, reducing welfare, and (2) the efficiency problem that information asymmetry blocks third-party reuse, reducing innovation. It thus resolves the tension between the data aggregation that yields AI efficiency and performance benefits and the data aggregation that yields market foreclosure and abuse of dominance. Finally, (3) our proposal makes it possible to evaluate algorithms for bias, because they execute within one central infrastructure.

How does this policy proposal relate to artificial intelligence? Our proposal improves AI access, ethics, and transparency, while allowing training on much larger datasets than portability permits. When users grant permission, “gatekeepers” must allow third-party access to the data they hold on users’ behalf. This solves AI ethics and transparency problems by enabling inspection of the algorithm to detect bias or unscrupulous behavior. Under our proposal, users can also punish bad behavior or terminate access they no longer wish to grant. By contrast, data portability moves the data to the third party, where users cannot be certain how their data are used or whether they are deleted upon request.