F.T.C. Opens Investigation Into ChatGPT Maker Over Technology’s Potential Harms
The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence start-up that makes ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information on individuals.
In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked OpenAI dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data, and said the company should provide the agency with documents and details.
The F.T.C. is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers,” the letter said.
The investigation was reported earlier by The Washington Post and confirmed by a person familiar with the investigation.
The F.T.C. investigation poses the first major U.S. regulatory threat to OpenAI, one of the highest-profile A.I. companies, and signals that the technology may increasingly come under scrutiny as people, businesses and governments use more A.I.-powered products. The rapidly evolving technology has raised alarms as chatbots, which can generate answers in response to prompts, have the potential to replace people in their jobs and spread disinformation.
Sam Altman, who leads OpenAI, has said the fast-growing A.I. industry needs to be regulated. In May, he testified before Congress to invite A.I. legislation, and he has visited hundreds of lawmakers, aiming to set a policy agenda for the technology.
On Thursday, he tweeted that it was “super important” that OpenAI’s technology was safe. He added, “We are confident we follow the law” and will work with the agency.
OpenAI has already come under regulatory pressure internationally. In March, Italy’s data protection authority banned ChatGPT, saying OpenAI unlawfully collected personal data from users and did not have an age-verification system in place to prevent minors from being exposed to illicit material. OpenAI restored access to the system the next month, saying it had made the changes the Italian authority asked for.
The F.T.C. is acting on A.I. with notable speed, opening an investigation less than a year after OpenAI introduced ChatGPT. Lina Khan, the F.T.C. chair, has said tech companies should be regulated while technologies are nascent, rather than only when they become mature.
In the past, the agency has typically begun investigations after a major public misstep by a company — for example, it opened an inquiry into Meta's privacy practices in 2018 after reports that the company had shared user data with a political consulting firm, Cambridge Analytica.
Ms. Khan, who testified at a House committee hearing on Thursday over the agency’s practices, previously said the A.I. industry needed scrutiny.
“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” she wrote in a guest essay in The New York Times in May. “While the technology is moving swiftly, we already can see several risks.”
Posted on: 7/14/2023 2:25:26 PM