As tech giants continue to harness vast amounts of personal data to train their AI systems, a legal backlash is underway. The lawsuits center on the allegedly unlawful extraction of personal data by companies for AI training, a practice that poses significant risks to privacy and security. The pursuit of regulatory checks and user compensation forms the core of this escalating legal battle.
Recognizing AI's profound societal risks
The potential of AI is astonishing, with possibilities ranging from curing disease to tackling climate change. The risks, however, are just as significant. Industry leaders, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Microsoft co-founder Bill Gates, have openly acknowledged these dangers. They have emphasized that mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war. Separately, an open letter signed by more than 1,000 technology leaders and experts called for a six-month pause in the development of powerful AI systems, citing the 'profound risks' they present to society and humanity.
AI's data appetite: A threat to privacy
Tech juggernauts such as OpenAI and Google are alleged to have surreptitiously scraped and collected vast volumes of personal data to train their AI systems. This data spans a wide spectrum: creative expressions, professional teachings, copyrighted works, and even personal conversations and comments. By consolidating and analyzing this data, the plaintiffs argue, these companies have effectively created digital clones of individuals, enabling them to predict and manipulate user behavior and misappropriate their skill sets. Such data mining not only infringes on privacy rights but also opens the door to misuse and manipulation.
A lawsuit has been filed against OpenAI and Microsoft seeking a temporary halt to AI use and development until the companies can demonstrate the safety of their products and provide effective privacy and property protections. The suit also demands recognition of the value of the information taken from users. Since launching their AI products, these companies have seen their market capitalizations rise by hundreds of billions of dollars, a gain the plaintiffs attribute largely to the allegedly stolen data. The suit argues that people are entitled to compensation for what was taken from them in the form of 'data dividends': a percentage of the revenues these AI products generate from that data.