OpenAI faces lawsuit for allegedly stealing & misusing people’s data to train AI
OpenAI has been sued in court for allegedly misusing a large number of people's data to "train" its popular generative artificial intelligence system ChatGPT.

Highlights
- A lawsuit was filed against OpenAI on 28 June
- The complaint alleges that OpenAI has stolen and misused a large number of people’s data from the internet to train its AI tools
OpenAI, whose sensational chatbot ChatGPT has taken the world by storm, seems to have a habit of hogging the limelight, but this time it is for all the wrong reasons.
According to the latest reports, a 150-page class-action lawsuit was filed against OpenAI on 28 June in the US District Court for the Northern District of California. Reports say the plaintiffs are individuals represented by the US-based Clarkson Law Firm, P.C. (professional corporation). The complaint lists multiple counts, including alleged violations of the Electronic Communications Privacy Act, the California Invasion of Privacy Act, and the Computer Fraud and Abuse Act.
AI products pose a risk to humanity, claims lawsuit
The complaint alleges that OpenAI has created, marketed, and operated its AI products unlawfully and harmfully. It also accuses the company of stealing and misusing a large number of people’s data from the internet and using that data to train its AI tools.
Further, the lawsuit claims that the company ‘seized’ “essentially every piece of data exchanged on the internet it could take”, without any notice, consent, or just compensation.
It even alleges that the AI products created by OpenAI, including ChatGPT, pose a potentially catastrophic risk to humanity.
Interestingly, this lawsuit comes at a time when concerns are escalating around the world over the need to regulate this space. Even OpenAI’s CEO, Sam Altman, testified before a Senate hearing in May on the risks of AI. He said that regulatory intervention from national governments would play a critical role in mitigating the risks posed by AI models, whose clout is growing day by day.