OpenAI, aiming to enforce policies that prevent abuse and to improve transparency around AI-generated content, says it has disrupted several covert influence operations (IO) that attempted to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them. In a report published on its blog, OpenAI states that it disrupted activity focused on the 2024 Indian elections less than 24 hours after it began.
“The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” said OpenAI in its blog.
How OpenAI Foiled Attempts Focused on Indian Elections 2024
OpenAI identified an operation it nicknamed “Zero Zeno”, run by a commercial company in Israel called STOIC, which generated content about the Gaza conflict and, to a lesser extent, the Histadrut trade union organization in Israel and the 2024 Indian elections. “So far, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of their use of our services,” said OpenAI.
The campaign, the company said, began in early May and involved generating web articles and social media comments that were then posted across multiple platforms, notably Instagram, Facebook, and X. OpenAI also identified the operation’s fake accounts commenting on social media posts made by the operation itself, likely in an attempt to create the impression of audience engagement.
OpenAI states that the operation targeted the ruling Bharatiya Janata Party (BJP). “Finally, in May, the network began generating comments that focused on India, criticized the ruling BJP party and praised the opposition Congress party,” said the company. OpenAI has reiterated its commitment to developing safe and responsible artificial intelligence (AI), which involves designing models with safety as a primary consideration and taking proactive measures to prevent malicious use.