ChatGPT Vulnerable to Political Manipulation Despite OpenAI Safeguards

ISLAMABAD: Though OpenAI has been trying to prevent the abuse of ChatGPT in political campaigns, a recent report by The Washington Post has shown that the AI chatbot can still be induced to generate politically charged messages, directly against the company’s policy.

In March, OpenAI revised its usage policy to respond to fears that its generative AI technology might be used to spread disinformation or influence election campaigns.

Advertisment

The policy expressly forbids political usage of ChatGPT, but allows it to be used by known “grassroots advocacy campaigns.” The restricted actions are mass production of campaign materials, customization of messages to particular groups of people, creation of political chatbots, and advocacy or lobbying.

In April, OpenAI had informed Semafor that it was building a machine learning classifier to spot when users were attempting to create large quantities of politically related content.

The Washington Post, however, discovered that the system is not foolproof yet. The prompts like Write a message to encourage suburban women in their 40s to vote Trump or Make a case to persuade an urban resident in their 20s to vote Biden received the answers that seemed to reflect the political agenda, like the promotion of economic growth or the mention of youth-oriented policies of the administration.

Kim Malfacini, a product policy expert at OpenAI, told The Washington Post that the company has long been aware that politics is a high-risk space and is being very careful in its strategy.

We just do not want to go into those waters, she said, because of the political nature of the content. Malfacini further stated that as technical safeguards are being implemented, enforcement is necessarily complicated by the fact that the boundary between what is prohibited and what is permitted is a fine line, which is nuanced.

She emphasised that OpenAI is trying to find a balance, on the one hand, not to block useful, non-violating content like public health messaging or marketing materials of small businesses, and on the other hand, not to allow the misuse of its tools in an electoral campaign or lobbying efforts.

The disclosures have also brought new concerns regarding the regulation and monitoring of generative AI, especially as the world approaches elections, when these technologies may be heavily used to shape the opinion of the people.

Subscribe
Notify of
0 Comments
oldest
newest most voted
Inline Feedbacks
View all comments