Civil society

When your AI knows you better than anyone: Privacy in the age of intimate assistants

A very modern confidant

For generations, people found privacy by shutting the door, speaking in a low voice or keeping a journal in a personal drawer. Today, tens of millions of people share private information with AI assistants. A recent survey found that 60 per cent of US adults use both general AI chatbots and specialised AI tools in their daily lives, and people increasingly rely on AI assistants for tasks once reserved for loved ones, therapists or doctors.

This intimacy is new – and it is increasing. While search engines have long answered discrete questions, AI assistants listen to the stories of our lives, connect the dots, and increasingly, draw upon previous interactions for further context. The result is a digital portrait richer than any search history: hopes, fears, moods, financial and medical goals, and half-finished love letters.

As we share more of our digital ‘selves’ with AI assistants, we feel empowered. However, the depth of the data we share raises new privacy and legal questions. How secure is our data? And who else can access – or demand access to – our data?

What we share

Unlike search engines and more traditional web interfaces, conversational AI assistants encourage oversharing. The convenience afforded by their simple and general-purpose interfaces can make it easy to overlook privacy protections like scrubbing names, account numbers and emotional context before hitting Enter.

When someone writes “I woke up anxious about my cardiology appointment”, “help me negotiate my company’s lending term sheet” or “is my grandmother entitled to rent protection?”, they share health information, trade secrets and family information, respectively. And each of those prompts is just the beginning of a conversation.

This sensitive information goes into cloud storage, model fine-tuning pipelines and even third-party plugins. Every such hop increases exposure risk.


First line of defence: security

AI platforms are magnets for ‘prompt leaks’. A recent study of 300 tools found that over 4 per cent of prompts and 20 per cent of files fed to chatbots contained confidential information. Attackers know that if they breach an AI assistant platform, they can gain access to everything its users chose to reveal.

However, in much of the world, AI prompts receive no special treatment: they are handled like any other cloud data. Policymakers can address this gap by encouraging or mandating strong encryption for conversation histories and standard data retention practices. For example, conversation history could be auto-deleted after three months.
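As a purely illustrative sketch, automating such a retention rule could look like the snippet below. It assumes a hypothetical conversation store exposing `list_conversations()` and `delete()` (not any real platform’s API), and uses a 90-day window to mirror the three-month example above.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window mirroring the three-month example above.
RETENTION_WINDOW = timedelta(days=90)


def purge_expired_conversations(store) -> int:
    """Delete conversation records older than the retention window.

    `store` is assumed to expose `list_conversations()`, returning objects
    with `id` and `created_at` (a timezone-aware datetime), and `delete(id)`.
    Returns the number of conversations removed.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    removed = 0
    for conversation in store.list_conversations():
        if conversation.created_at < cutoff:
            store.delete(conversation.id)
            removed += 1
    return removed
```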

Sophisticated businesses are already leading the way with enterprise-wide controls that prevent employees from inputting sensitive information into AI assistants. Policymakers can encourage businesses to develop cybersecurity frameworks that standardise and require such practices.
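To illustrate the kind of pre-submission check such enterprise controls might perform, the sketch below gates prompts before they leave the corporate network. The `check_prompt` helper and its two patterns are assumptions for illustration only; a real control would rely on a maintained data-loss-prevention rule set.

```python
import re

# Illustrative patterns only; a real control would use a maintained
# data-loss-prevention rule set, not two regular expressions.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns detected in a prompt.

    An empty list means the prompt can be forwarded to the AI assistant;
    otherwise the gateway would block it or ask the employee to redact it.
    """
    return [
        name
        for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    ]


if __name__ == "__main__":
    findings = check_prompt("Please draft a refund note for card 4111 1111 1111 1111")
    print("Blocked:" if findings else "Allowed:", findings)
```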

Health information and attorney-client records already enjoy special legal privileges and are, consequently, subject to special data handling requirements. Policymakers should explore extending similar privileges to conversations with AI assistants.

Second line of defence: standardising lawful access

Even when a platform stays secure against outside attackers, AI assistant platforms differ in how much internal access their staff have to conversation histories. Some internal access is necessary to prevent AI assistants from being used for illegal purposes. Additionally, many AI assistant platforms aggregate trends on the kinds of conversations that people are having.

Rather than expecting users of AI assistants to review the fine print of how organisations might use their data, policymakers should put forth clear standards for those organisations to abide by.

Additionally, in many jurisdictions, courts and authorities may be able to order AI assistant platforms to release specific conversations to assist with criminal investigations, civil discovery, employer audits or regulatory oversight. Significant precedent exists for courts and authorities demanding the release of e-mails, text messages and documents in cloud storage.

Policymakers should require AI providers to give clear notice and publish transparency reports (e.g., how often conversation histories are accessed internally, individually and in aggregate, and how many external requests are received and complied with) so that users understand the risk. Here, policymakers will need to balance privacy requirements against judicial and law enforcement requirements.

Third line of defence: the digital estate dilemma

When a user becomes incapacitated, who can review their AI conversations? And when a user dies, who inherits their AI ‘memories’? In several recent cases, grieving family members have reviewed the AI conversation histories of their departed loved ones to foster better understanding. However, such histories may include sensitive or personal information related to other businesses or people, raising new data protection issues. And what if a family member wants to use the AI-generated voice of a departed loved one for personal or public-facing content?

Legal regimes are uneven or silent on many of these topics. Who can legally access the content of a person’s digital assets, such as e-mails and conversations with AI assistants, depends on the platform’s terms of service and the incapacity and succession rules of the jurisdiction.

As a practical matter, family members and friends may have access to an AI assistant’s conversation history when someone becomes incapacitated or passes away, regardless of the legal complexities. Conversely, even when platform rules or laws allow disclosure of such information, strong encryption can prevent access unless passwords are shared.

Policymakers can address these issues in three ways. First, they can clarify the treatment of AI assistant conversation histories and digital likenesses in incapacity and succession rules. Second, they can participate in standardisation efforts across jurisdictions, given that data may be distributed across multiple regions. Third, they can consider adopting a set of default rules that overrule platforms’ terms of service.

The OECD AI Principles as a north star

As policymakers work to address these emerging issues related to conversations with AI assistants, the OECD AI Principles can serve as a north star. For example, human-centred values can guide requirements for platforms to give users easy access to privacy settings, the ability to opt out of data retention and the option to easily export conversation history. In the interests of transparency and explainability, policymakers should require platforms to disclose who can read stored conversation history and under what conditions. Robustness, security and safety considerations can inform which default encryption standards should apply to conversations with AI assistants.

AI assistant rules of thumb

As the policy process plays out, individuals should follow a few rules of thumb when using AI assistants. Stripping personal information from prompts, or rephrasing prompts so they cannot be linked to specific people, is an easy start; a small sketch of this idea follows below. Learning the security and access rules that apply to different platforms also helps individuals make informed decisions. In alignment with applicable laws, individuals should set out in estate planning documents which people they want to have access to their AI conversation histories, or whether they want those histories deleted. If necessary, these wishes should be supplemented with clear processes for accessing passwords.
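The sketch below illustrates the first rule of thumb under stated assumptions: a hypothetical `scrub_prompt` helper that replaces two common identifiers (email addresses and phone-number-like digit runs) with placeholders before a prompt is sent. Personal information is much broader than these two patterns, so this is a starting point rather than a guarantee.

```python
import re

# Hypothetical, minimal redaction helper: it covers only two identifier
# types and is not a substitute for reading a prompt before sending it.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d(?:[\s-]?\d){6,14}\b"), "[PHONE]"),
]


def scrub_prompt(prompt: str) -> str:
    """Return the prompt with recognised identifiers replaced by placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt


print(scrub_prompt("Email my cardiologist at dr.lee@example.org before Friday"))
# Prints: Email my cardiologist at [EMAIL] before Friday
```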

As with all relationships, trust is key

We have entered an era in which AI assistants can appear to know our hopes as well as our spouses do and our financial goals better than our accountants do. This has brought real benefits, such as personalised coaching and faster research in areas like finances and health. Yet this growing intimacy can also become dangerous if it is not paired with strong privacy and consumer protection guardrails that address hacking, surveillance, access and ownership.

Building trust in tomorrow’s AI assistants will require shared responsibility: users empowered to practise data hygiene, AI assistant providers that bake privacy and consumer protection into their platforms, and governments that modernise legal frameworks on an ongoing basis. If we succeed, the AI assistant will remain what we want it to be: a helpful friend who doesn’t pry and keeps our secrets safe.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.