Former Google executive Mustafa Suleyman, who played a pioneering role in Google's artificial intelligence (AI) ventures, fears that AI could enable the creation of 'more lethal pandemics'. He argues that the unchecked growth and use of AI could lead to dangerous experimentation with deadly pathogens, and he calls for strict regulation.
AI's potential misuse in bioengineering
Mustafa Suleyman, formerly head of applied AI at Google's DeepMind, sounds an alarm over the potential misuse of artificial intelligence. His concerns are not unfounded: in the wrong hands, AI could help create 'more lethal pandemics'. He envisions a dark scenario in which people experiment with engineered synthetic pathogens that turn out to be more transmissible, causing widespread harm or even global pandemics. This, Suleyman says, underscores the urgent need for more stringent regulation and containment of AI software.
AI and the risks of bioengineering
Suleyman's biggest fear is the combination of AI with bioengineering. He projects a terrifying future in which even a 'kid in Russia' could use AI to engineer a pathogen more lethal than anything the world has encountered. The risk is not merely hypothetical; it is one we might face within the next five years. It is therefore crucial, Suleyman says, to limit access to the tools and knowledge needed for such dangerous experiments.
Need for tight AI regulation
In light of these dangers, Suleyman calls for stricter regulation and containment. Access to the tools, the software, the cloud environments, and even certain substances needs to be restricted to prevent misuse. The tech industry insiders closest to the work can foresee the risks, he says, and it is high time to bring them under control. This call for caution and tighter control comes not just from Suleyman but is echoed by dozens of tech leaders around the world.
Suleyman isn't alone in his concerns about the potential dangers of AI. Tech luminaries including Elon Musk have voiced similar fears. Musk has even gone so far as to warn about AI going 'Terminator' on humans, a grim reference to the sci-fi films in which machines wage war against humanity. These tech moguls are calling for a proactive approach to AI: slowing its advancement and ensuring it 'first does no harm'.