OpenAI's efforts to reduce factual inaccuracies in its ChatGPT chatbot are insufficient for full compliance with European Union data regulations, according to a task force from the EU's privacy watchdog.
In a report released on Friday, the task force said that while the measures taken to improve transparency help prevent ChatGPT's outputs from being misinterpreted, they are not sufficient to comply with the data accuracy principle. The task force was set up last year by the body that unites Europe's national privacy watchdogs, after national regulators, led by Italy's authority, raised concerns about the widely used AI service.
The report noted that investigations by national privacy watchdogs in several member states are still ongoing, so a full description of the results is not yet possible. The findings it presents should be understood as a consensus position among the national authorities.
The report also highlighted that, owing to the probabilistic nature of ChatGPT's training approach, the model can produce biased or fabricated outputs. Moreover, end users are likely to treat whatever ChatGPT provides, including information about individuals, as factually accurate, regardless of whether it actually is.