Musk vs. Altman: The race to build a 'friendly' A.I. and save humanity

JJohn July 18, 2023 9:02 PM

Elon Musk and Sam Altman, once colleagues, now compete to build a 'superintelligent' A.I. that aligns with human interests rather than posing an existential risk. Their approaches, however, differ, reflecting divergent views on the delicate balance between innovation, control, and potential threats to humanity.

Musk and Altman's race for A.I. supremacy

Former colleagues Elon Musk and Sam Altman now find themselves in a race of their own. Their goal? To develop a 'superintelligent' A.I. that aligns with human interests rather than posing an existential threat. While their shared objective is clear, their strategies differ, offering an intriguing look into the complex world of A.I. development and the challenges inherent in managing such profound technological advancement.

Musk's mission: An AGI that 'understands'

Elon Musk, through his new venture xAI, is setting his sights on an ambitious goal: developing an artificial general intelligence (AGI). Unlike conventional A.I., an AGI would have human-like cognitive abilities, enabling it to 'understand the universe,' as Musk puts it. Such a leap in A.I. capabilities is likely still a decade away, but Musk's vision undeniably sets a high bar for the future of A.I. development.

Musk's approach to A.I. development hinges on the creation of a 'good' A.I., one driven by an insatiable curiosity and a relentless pursuit of truth. Critics, however, raise concerns about this approach: truth, after all, can be subjective and elusive. The challenge lies not just in building an A.I. that seeks truth, but in defining what 'truth' means for an artificial intelligence and avoiding the pitfalls of confirmation bias.

Gary Gensler, Chair of the Securities and Exchange Commission, has expressed concerns about the disruptive potential of A.I. in financial markets. The risk? That A.I. could lead to 'monocultures,' in which investors make similar decisions based on the same data sources, creating the potential for financial crises. This underscores the need for a careful and regulated approach to A.I. development, given its capacity to significantly influence financial systems and economies.

OpenAI's control strategy for superintelligent A.I.

While Musk seeks to create a 'truth-seeking' A.I., Sam Altman and his team at OpenAI are taking a more cautious path. They are developing an 'automated alignment researcher,' essentially a superintelligent A.I. designed to keep other superintelligent A.I.s in check. Their goal is to prevent a superintelligent A.I. from 'going rogue,' reflecting a more controlled approach to A.I. advancement.
