Publication date: 22.08.2024 16:02:00
OpenAI continues to actively combat attempts to use its technology to manipulate public opinion and influence political processes. The company recently suspended several ChatGPT accounts linked to an Iranian operation aimed at interfering in the US elections.
The operation used AI to generate articles and social media posts intended to shape public opinion ahead of the upcoming elections, but its audience reach remained insignificant.
This is not the first time the ChatGPT developer has acted against state actors abusing the popular chatbot for nefarious purposes.
In May 2024, the company shut down five campaigns that similarly used generative AI to manipulate public opinion.
In January, OpenAI announced that it would not allow its technology to be used for political campaigning or lobbying until researchers better understood the potential abuse vectors and their consequences.
The restrictions included a ban both on account registration by official election campaign staff and on chatbots created to pose as specific candidates.
Interestingly, in the investigation that identified the Iranian group Storm-2035, which created websites imitating news outlets and stoked discussions between polarized communities on social media (primarily on X), OpenAI found that most of these posts drew little engagement.
This once again confirms that such operations, however easy they are to launch with AI tools, rarely succeed in attracting an audience. Still, the company is confident that as the elections approach and online discussions heat up, more such disclosures can be expected.
OpenAI continues to develop its models with safety in mind and actively intervenes in cases of technology abuse, such as operations to influence public opinion.
Despite the difficulty of tracking how AI-generated content spreads, the company now aims to detect and prevent such abuse at scale.