Publication date: 17.07.2025 11:09:00
The European Commission has published a report by the Centre for European Policy Studies (CEPS), summarising the views of participants in two public consultations on key articles of the AI Act. The study addresses 88 questions on how to define AI systems and what practices are prohibited.
The consultations were launched in February 2025 and gathered more than 390 responses. The majority (47.2%) came from industry representatives, while private citizens provided only 5.7%. This imbalance highlights the predominance of technical and commercial perspectives.
Participants agree that basic technical terms need clearer definition. “Autonomy”, “adaptation” and “inference” (the ability to draw conclusions from input data) are interpreted very differently, and without clear criteria ordinary software, such as an Excel spreadsheet with formulas, could fall under the definition of an AI system.
Respondents suggest introducing a “complexity threshold” that would separate learning and self-learning systems from classic deterministic algorithms. To avoid confusion, they also propose publishing a list of examples of systems that do not qualify as AI.
On the prohibitions, experts are most concerned about emotion recognition in HR and education, non-transparent manipulation, social scoring, and remote biometric surveillance. They call for clear examples of what is allowed and what is not.
Important points remain unclear: how should “significant harm” be defined? Where is the line between legitimate influence and manipulation? How can risk assessment be distinguished from unlawful crime prediction?
A significant number of survey responses emphasized the need to distinguish between acceptable uses of biometric data, especially in the context of law enforcement, and uses that could lead to discrimination. Respondents called for specific examples of acceptable practice, especially regarding the labeling and filtering of legally obtained biometric data sets.
The division of responsibility is also actively debated: who is liable, the developer or the user? Small companies in particular warn of the risk of an excessive compliance burden and ask for practical guides, templates and government support.
Based on this feedback, the Commission is preparing its first guidelines. They will help providers and other stakeholders apply the AI Act in practice and will take into account new cases and accumulated experience. Publication of the clarifications is planned for 2025.
We believe that such expert discussions on AI regulation are extremely important, given the widespread use of the technology.
We also recall that the draft law "On Artificial Intelligence", intended to lay the foundations of legal regulation in this area in Kazakhstan, includes a number of provisions developed with the participation of EDF experts, in particular norms aimed at prohibiting risky uses of AI in analysing citizens' biometric data.
The same applies to the need to limit the use of AI in sensitive scenarios: facial recognition in public places, exploitation of human vulnerabilities, and influencing behaviour without informed consent.
(this text was translated automatically)