Publication date: 17.12.2025 08:38
In recent years, governments worldwide have introduced digital IDs and required online companies to verify users’ ages.
Presented as child protection, these measures in practice create new privacy risks, according to the Electronic Frontier Foundation (EFF).
Experience in the UK, Australia, and the US shows that mandatory verification and content restrictions have led to site closures, driven by reduced traffic and burdensome data-collection requirements. Adults face limits as well: access to lawful content is blocked unless they share personal data with third parties. Age filters thus become tools of censorship.
Children’s online risks fall into three categories: content risks (violence, harmful materials), behavioral risks (bullying, over‑dependence), and contact risks (grooming, coercion). These threats cannot be eliminated by technical age checks.
Parental controls and family accounts remain more effective solutions, allowing flexible management of children's access. Yet studies show that only about half of parents use such tools.
Expanding functionality and simplifying interfaces could increase adoption and reduce risks without government intervention. Companies are already acting: Android simplified app blocking and content filtering, while Apple introduced APIs to set age ranges without sharing birth dates.
Yet parental controls do not solve the problem of children living in unsafe digital environments. Strong privacy laws and safeguards against state and corporate abuse are more important.
Advocates propose banning behavioral targeted advertising, which drives data collection, and restricting data brokers. This would reduce surveillance and create a safer digital space for all users, including teenagers.
Rights groups also urge rejection of mandatory age verification and support for data protection laws. Only then can anonymity, freedom, and safety online be preserved for children and adults alike.