The OECD has proposed new approaches to protecting personal data when using AI

Publication date: 26.08.2025 15:38:00
AI assistants are increasingly becoming “digital confidantes” with whom users share information about their health, work, finances, and personal lives. This volume of candid information forms detailed digital portraits of people.

Unlike search engines, assistants continuously retain context and connect disparate pieces of information into a coherent picture. This creates new risks for privacy and for users' control over their personal data.

As Var Shankar, founder of the Council on the Governance of AI and author of the analysis for the Organisation for Economic Co-operation and Development (OECD), notes, a significant share of this data is confidential and stored in the cloud, often passing through third-party services. Each transfer of information increases the risk of a leak.

A recent study of 300 chatbot tools and AI-enabled SaaS applications found that more than 4% of prompts and about 20% of uploaded files contained sensitive information. This highlights both the scale of the problem and the vulnerability of these systems.

Shankar proposes enshrining security standards in legislation: end-to-end encryption of chat history, automatic deletion of data after a set period, and a ban on unauthorized access by platform employees.

He also raises the issue of transparency: companies should publish reports on internal access to user data and on external requests from government agencies, so that people understand who can access their correspondence and under what circumstances.

A separate legal question concerns “digital legacy”: to whom, and under what conditions, assistant data may be transferred after a user's death or loss of legal capacity.

The author emphasizes that responsibility for security should not be shifted onto users, who are otherwise forced to parse complex agreements. Uniform mandatory standards are needed for all providers of AI assistants.

As a guide, he points to the OECD AI Principles (first adopted in 2019 and updated in 2024 to reflect technological and policy changes), which call for transparency, safety, and a human-centered approach, preserving the benefits of the technology while minimizing its threats.

This approach is meant to strike a balance between innovation and the protection of rights, so that AI assistants remain useful tools rather than a source of danger.


(The text was translated automatically)