
AI companies must protect user chats from mass surveillance

Publication date: 15.12.2025 11:33:00

The Electronic Frontier Foundation (EFF) has called on developers of AI chatbots to strengthen safeguards against unauthorized access or leaks of user conversations. According to EFF, chats with AI have reached the same level of sensitivity as private messaging or email and should be protected by equivalent legal guarantees.

Millions of people use AI to discuss ideas, seek advice, or share concerns they would not make public. Such dialogues may include sensitive queries about health, finances, personal safety, or family issues. A single chat can reveal more intimate details than a traditional diary.

EFF stresses that without strong privacy guarantees, users will avoid turning to AI services for learning, self-expression, and support. Measures that would build trust include:

▪️ End-to-end encryption; 

▪️ Access control with strict authorization and monitoring of requests from individuals and authorities; 

▪️ Data minimization and confidential modes where conversations are not stored or used for training.
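The last measure can be illustrated with a minimal Python sketch of a "confidential mode". Everything here is hypothetical — `ChatSession`, its fields, and the echo reply are illustrative stand-ins, not any vendor's actual API — but it shows the core idea: in confidential mode, a message is used only to produce the reply and is never written to the persistent log that could later be subpoenaed or used for training.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChatSession:
    """Hypothetical chat session sketching data minimization:
    with confidential=True, messages live only in memory for the
    duration of the reply and never reach the persistent log."""
    confidential: bool = False
    _persistent_log: List[str] = field(default_factory=list)

    def handle(self, message: str) -> str:
        reply = f"echo: {message}"  # stand-in for the model's actual answer
        if not self.confidential:
            # Only non-confidential sessions persist the message,
            # i.e. only this data could be stored or used for training.
            self._persistent_log.append(message)
        return reply

    def stored_messages(self) -> List[str]:
        return list(self._persistent_log)

# The same query leaves no trace when confidential mode is on.
normal = ChatSession(confidential=False)
normal.handle("my health question")

private = ChatSession(confidential=True)
private.handle("my health question")

print(len(normal.stored_messages()))   # 1
print(len(private.stored_messages()))  # 0
```

The design choice mirrors EFF's point: the less a service retains by default, the less there is for private parties or law enforcement to demand later.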

If companies retain large amounts of user data, private parties and law enforcement will inevitably demand disclosure. EFF argues that AI firms must prepare in advance and side with users. Minimum commitments include resisting mass data requests, notifying users of disclosure demands, and publishing regular transparency reports.