Speculation about artificial intelligence (AI) is rampant, ranging from fears that its rapid evolution could trigger societal-scale disruptions to claims that it may prove civilization-ending. Going beyond the headlines, we examine the real implications of AI from the perspective of leading AI researchers.
Dispelling the 'extinction-level' threat myth
Statements referring to AI as an 'extinction-level' threat have been making the rounds in recent times, fueling widespread fear and speculation. However, AI experts from UM-Dearborn, including Professors Hafiz Malik, Samir Rawashdeh, and Birhanu Eshete, debunk this notion. They emphasize that current AI capabilities are task-specific and not comparable to human-like general intelligence. Although AI can perform impressive feats, such as beating human chess players or diagnosing illnesses, these remain narrow, strictly constrained abilities. Artificial general intelligence (AGI), in which AI could adapt to new circumstances much as humans do, is still far from reality.
AI domination: A potential societal risk
While the fear of AGI might be overblown, AI experts voice concerns over another potential risk: the domination of the AI sector by a handful of powerful companies. These companies primarily develop AI for commercial purposes rather than societal benefit, fostering dependence on AI across many sectors. Such a scenario could entrench societal inequality, making it difficult to simply 'unplug' the technology without causing significant economic disruption.
AI's role in disinformation and deepfakes
Alongside the dominance issue, experts point to another clear AI-induced threat: the rise of disinformation campaigns and deepfakes. They highlight how AI has amplified disinformation's impact, driving societal polarization and eroding trust in information and democratic institutions. As AI technology advances, it is being misused to create increasingly convincing deepfakes and to perpetrate criminal scams, further undermining public trust.
Efforts to regulate AI
Given AI's potential implications, there have been concerted efforts to regulate its use. The European Union has been at the forefront, passing a draft version of the EU AI Act that limits the use of facial recognition software and requires creators of generative AI systems to be more transparent about their data usage. In the U.S., the AI Bill of Rights aims to guide the design, use, and deployment of automated systems to protect the public. The challenge, however, lies in reaching consensus among AI experts on AGI's potential capabilities and on which risks should be prioritized.
Ultimately, how AI reshapes our world, and whether its impacts are beneficial or harmful, hinges largely on human control and decision-making. While AI could pose serious threats, the experts agree that if we end up in a place where AI endangers civilization, the fault will likely be ours, not the machines'. This underscores the importance of responsible AI development and deployment.