
OpenAI bans a political chatbot developer to curb misinformation

On January 20, OpenAI banned the developers of Dean.Bot, a ChatGPT-powered chatbot supporting Rep. Dean Phillips, for violating usage policies. This marks one of its first public interventions following new commitments for the election season.

Social Samosa

According to The Washington Post, OpenAI took action on January 20 by banning the creators of Dean.Bot, a chatbot powered by ChatGPT. The bot aimed to boost interest in the Democratic candidate from Minnesota, Rep. Dean Phillips, but OpenAI cited a violation of usage policies, emphasizing the need for compliance.

The company said it removed the developer account for knowingly breaching its API usage guidelines, particularly those prohibiting political campaigning and unauthorized impersonation, according to a letter shared with The Washington Post.

The chatbot was taken down shortly after the story of its launch was published, although its developers briefly attempted to keep it running using alternative APIs. Delphi, an AI startup, had built the bot as part of a project commissioned by the recently established Super PAC We Deserve Better, founded by Silicon Valley entrepreneurs Matt Krisiloff and Jed Somer.

The chatbot gave potential voters a way to hold a "conversation" with Phillips and hear his campaign messages. The initial interaction included an on-screen disclaimer stating that the bot was not the real Phillips and was affiliated with We Deserve Better. The website now greets visitors with an out-of-order message: "Apologies, DeanBot is away campaigning right now!"

The bot's removal is one of OpenAI's first public interventions since it announced new commitments for the election season, and it signals an immediate effort to police campaign-related uses of the company's technology.

On January 16, OpenAI shared its comprehensive strategy to address the role of AI in the upcoming presidential election. This move is seen as a critical intersection of politics and technology, focusing on combating AI-driven misinformation. The company introduced new usage policies and commitments to uphold the integrity of the election, encompassing:

  1. Enhanced transparency regarding the origin of images, detailing the tools used, including DALL-E.
  2. Updates to ChatGPT's news sources, incorporating attributions and links in responses.
  3. Collaboration with the National Association of Secretaries of State (NASS) to integrate accurate voting information into specific procedural queries.

The company had already implemented a policy prohibiting developers from building applications for political campaigning or lobbying, or chatbots impersonating real individuals, including candidates and government entities. It also restricts applications that discourage democratic participation by spreading inaccurate voting information or misrepresenting eligibility.

Concerns about technology's role in disseminating information and influencing voter groups have intensified in the lead-up to the election. AI, in particular, remains a grey area in the guidelines of many social media companies. Watchdogs, advocates, and even the FCC have expressed concerns about the potential misuse of AI voice cloning and the rise of increasingly convincing deepfakes.

In December, two nonprofits reported that Microsoft's AI chatbot Copilot failed to provide accurate election information and propagated misinformation.

In response, some companies have opted to establish more stringent policies on political campaigning, such as those Google and Meta announced last year. However, debate continues over outright content removal and the impact of AI-generated content on a vulnerable audience amid declining media literacy.
