
Can Foundation Models Help Us Achieve “Perfect Secrecy”?

A new study explores how to apply machine learning to digital assistants in a way that could better protect our data.

[Image: Smartphones light up the faces of people in the dark.] Many of us use smart assistants on our phones and in our homes. How can we be assured that our personal information is kept private when these models answer our questions? | Image: DALL-E

Digital assistants of the future promise to make everyday life easier. We’ll be able to ask them to perform tasks like booking out-of-town business travel accommodations based on the contents of an email, or answering open-ended questions that require a mixture of personal context and public knowledge. (“Is my blood pressure in the normal range for someone of my age?”) But before we can reach new levels of efficiency at work and at home, one big question needs to be answered:

How can we provide users with strong and transparent privacy guarantees over the underlying personal information that machine learning models use to arrive at these answers?

If we expect digital assistants to facilitate personal tasks that involve a mix of public and private data, we’ll need the technology to provide “perfect secrecy,” or the highest possible level of privacy, in certain situations. To date, methods have either ignored the privacy question or provided weaker privacy guarantees.

Third-year Stanford computer science PhD student Simran Arora has been studying the intersection of machine learning (ML) and privacy with Associate Professor Christopher Ré as her advisor. Recently, they set out to investigate whether emerging foundation models — large ML models trained on massive amounts of public data — hold the answer to this urgent privacy question. The resulting paper was released in May 2022 on the preprint server arXiv, with a proposed framework and proof of concept for using ML in the context of personal tasks.

Read the full study, “Can foundation models help us achieve perfect secrecy?”

 

Perfect Secrecy Defined

 

According to Arora, a perfect secrecy guarantee satisfies two conditions. First, as users interact with the system, the probability that adversaries learn private information does not increase. Second, as multiple personal tasks are completed using the same private data, the probability of data being accidentally shared does not increase.
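One loose way to formalize that intuition (an illustrative sketch, not the paper’s exact definition) is to require that an adversary’s beliefs about the private data be the same before and after observing the system, no matter how many tasks reuse that data:

```latex
% Illustrative formalization (an assumption, not the paper's exact definition):
% D = the user's private data, V = everything an adversary can observe as the
% user completes any number of personal tasks over that same data.
\[
  \Pr[\, D = d \mid V = v \,] \;=\; \Pr[\, D = d \,]
  \qquad \text{for all private data } d \text{ and observations } v .
\]
% In words: interacting with the system gives the adversary no additional
% information about the private data, regardless of how many tasks are run.
```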

With this definition in mind, she has identified three criteria for evaluating a privacy system against the goal of perfect secrecy:

  1. Privacy: How well does the system prevent leakage of private data?
  2. Quality: How does the model perform a given task when perfect secrecy is guaranteed?
  3. Feasibility: Is the approach realistic in terms of time and costs incurred to run the model?

Today, state-of-the-art privacy systems use an approach called federated learning, which enables collective model training across multiple parties without exchanging raw data. In this method, the model is sent to each user and then returned to a central server with that user’s updates. In theory, source data is never revealed to participants. Unfortunately, other researchers have found that it is possible to recover data from an exposed model.
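As a rough illustration of those communication rounds, here is a minimal federated-averaging sketch; the linear model, the synthetic client data, and the plain averaging rule are simplifying assumptions, not the setup used in the study:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training: gradient steps on its own data; the raw data never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient on local data
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server sends the model out, each client trains locally, the server averages the returned updates."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Toy example: three clients, each holding private (X, y) pairs that stay on-device.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):                          # many rounds of communication
    w = federated_round(w, clients)
```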

A popular technique for strengthening the privacy guarantee of federated learning is differential privacy, a statistical approach to safeguarding private information. It requires the implementor to set privacy parameters that govern a trade-off between the performance of the model and the privacy of the information. These parameters are difficult for practitioners to set in practice, and the trade-off between privacy and quality is not standardized by law. So although the chances of a breach may be very low, perfect secrecy isn’t guaranteed with a federated learning approach.
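To see where that judgment call enters, here is a hedged sketch of a differentially private update using a Laplace-style mechanism; the clipping bound, the epsilon value, and the noise scale are illustrative assumptions, not recommended settings:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, epsilon=1.0, rng=None):
    """Clip a client's update and add Laplace noise calibrated to epsilon.
    A smaller epsilon means more noise: stronger privacy but lower model quality."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound each client's influence
    noise = rng.laplace(scale=clip_norm / epsilon, size=update.shape)
    return clipped + noise

# The implementor must choose epsilon: the privacy/utility trade-off lives in this one number.
noisy_update = privatize_update(np.array([0.4, -0.2, 0.9]), epsilon=0.5)
```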

“Currently, the industry has adopted a focus on statistical reasoning,” Arora explains. “In other words, how likely is it that someone will discover my personal information? The differential privacy approach used in federated learning requires organizations to make judgment calls between utility and privacy. That’s not ideal.”

A New Approach with Foundation Models

When Arora saw how well foundation models like GPT-3 perform new tasks from simple commands, often without needing any additional training, she wondered if these capabilities could be applied to personal tasks while providing stronger privacy than the status quo.

“With these large language models, you can say ‘Tell me the sentiment of this review’ in natural language and the model outputs the answer — positive, negative, or neutral,” she says. “We can then use that same exact model without any upgrades to ask a new question with personal context, such as ‘Tell me the topic of this email.’ ”
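In code, the point is that the very same model call serves both tasks and only the prompt changes; `query_foundation_model` below is a hypothetical stand-in for whichever public model the assistant uses:

```python
def query_foundation_model(prompt: str) -> str:
    """Hypothetical stand-in: replace with a call to whichever public foundation model you use."""
    return f"<model response to: {prompt!r}>"

# Same off-the-shelf model, no additional training -- only the instruction changes.
sentiment = query_foundation_model(
    "Tell me the sentiment of this review: 'The battery lasts all day.'"
)
topic = query_foundation_model(
    "Tell me the topic of this email: 'Can we move the client visit to Tuesday?'"
)
print(sentiment)
print(topic)
```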

Arora and Ré began to explore the possibility of using off-the-shelf public foundation models within a private user silo to perform personal tasks. They developed a simple framework called Foundation Model Controls for User Secrecy (FOCUS), which proposes a unidirectional data flow architecture for accomplishing personal tasks while maintaining privacy. The one-way aspect of the framework is key: in a scenario with different privacy scopes (i.e., a mix of public and private data), the public foundation model’s data is queried before the user’s private data, preventing leakage back into the public arena.
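A minimal sketch of that one-way flow might look like the following; the function names, the retrieval steps, and the local model call are illustrative assumptions about the architecture, not code from the paper:

```python
def retrieve_public(task: str) -> str:
    """Hypothetical lookup over public knowledge (queried first, before any private data is touched)."""
    return f"<public background knowledge relevant to: {task}>"

def retrieve_private(task: str, private_docs: list[str]) -> str:
    """Hypothetical lookup over the user's own documents, run entirely inside the private silo."""
    return "\n".join(doc for doc in private_docs if task.split()[0].lower() in doc.lower()) or "\n".join(private_docs)

def focus_style_answer(task: str, private_docs: list[str], local_model) -> str:
    """Illustrative unidirectional pipeline: public context first, private context second,
    with inference run by a public foundation model hosted inside the user's silo, so nothing
    derived from private data ever flows back to the public side."""
    public_context = retrieve_public(task)                    # step 1: public scope only
    private_context = retrieve_private(task, private_docs)    # step 2: private scope, stays local
    prompt = f"{public_context}\n{private_context}\nTask: {task}"
    return local_model(prompt)                                # step 3: local inference, nothing leaves

# Usage sketch: `local_model` is any locally hosted foundation model callable.
answer = focus_style_answer(
    "Summarize my travel plans",
    ["Email: flight confirmation for Tuesday", "Email: dinner with the client at 7pm"],
    local_model=lambda prompt: f"<local model output for: {prompt[:40]}...>",
)
```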

Testing the Theory

Arora and Ré evaluated the FOCUS framework against the criteria of privacy, quality, and feasibility. The results were encouraging for a proof of concept. FOCUS not only protects personal data, it also hides the task the model was asked to perform and how the task was completed. Best of all, the approach does not require organizations to set privacy parameters that trade off utility against privacy.

Regarding quality, the foundation model approach rivaled federated learning on six out of seven standard benchmarks. It underperformed, however, in two specific scenarios: when the model was asked to perform an out-of-domain task (something not covered in its training) and when the task was run with small foundation models.

Finally, they considered the feasibility of their framework compared with a federated learning approach. FOCUS eliminates the many rounds of communication among users that federated learning requires and lets the pre-trained foundation model do the work through inference, making for a faster, more efficient process.

Foundation Model Risks

Arora notes that several challenges must be addressed before foundation models could be widely used for personal tasks. For example, the decline in FOCUS performance when the model is asked to do an out-of-domain task is a concern, as is the slow runtime of the inference process with large models. For now, Arora recommends that the privacy community increasingly consider foundation models as a baseline and a tool when designing new privacy benchmarks and motivating the need for federated learning. Ultimately, the appropriate privacy approach depends on the user’s context.

Foundation models also introduce their own inherent risks. They are expensive to pretrain, and they can hallucinate, or misclassify information, when they are uncertain. There is also a fairness concern: so far, foundation models are available predominantly for resource-rich languages, so a public model may not exist for every personal setting.

Pre-existing data leaks are another complicating factor. “If foundation models are trained on web data that already contains leaked sensitive information, this raises an entirely new set of privacy concerns,” Arora acknowledges.

Looking ahead, she and her colleagues in the Hazy Research Lab at Stanford are investigating methods for prompting more reliable systems and enabling in-context behaviors with smaller foundation models, which are better suited for personal tasks on low-resource user devices.

Arora can envision a scenario, not too far off, where you’ll ask a digital assistant to book a flight based on an email that mentions scheduling a meeting with an out-of-town client. And the model will coordinate the travel logistics without revealing any details about the person or company you’re going to meet.

“It’s still early, but I hope the FOCUS framework and proof of concept will prompt further study of applying public foundation models to private tasks,” Arora says.

