Google Gemini's Diversity Overload and ChatGPT's Gibberish: A Look into AI Glitches

Nicholas February 22, 2024 7:01 AM

When artificial intelligence (AI) systems like Google's Gemini and ChatGPT go haywire, the results range from amusing to concerning. With Gemini generating 'diverse' historical figures and ChatGPT spouting nonsensical Spanglish, it's evident that AI still has some kinks to iron out.

Google's Gemini faces diversity debacle

When it comes to racial representation, artificial intelligence (AI) systems like Google's Gemini must tread carefully. The Gemini model recently stirred controversy when it generated anachronistic images of diverse historical figures. What should have been a move towards inclusivity backfired, as the AI produced images like a black Roman emperor, an Asian Albert Einstein, and a 'diverse' Mount Rushmore. This glaring mistake drew significant backlash online, prompting Google to address the issue. According to Jack Krawczyk, Google Gemini Experiences product lead, the company is working diligently to fix the inaccuracies.

Potential ideological bias in AI

As artificial intelligence systems become increasingly prevalent, concerns about potential bias and censorship grow. According to Marc Andreessen, co-founder of Andreessen Horowitz (a16z), the commercialization of AI presents a troubling possibility. Andreessen warns that the centralized control of AI systems by a few large companies could lead to a rise in ideological bias in AI outputs. The scenario is akin to a media landscape dominated by a few outlets, where diversity of thought could be stifled, and the free flow of information might be at risk.

The call for open-source AI models

In the face of potential bias in AI systems, experts advocate for diversity and open-source models. Yann LeCun, Meta’s chief AI scientist, highlights the need for a diversity of open-source AI models on which specialized models can be built. Likewise, Bindu Reddy, the CEO of Abacus AI, insists on the importance of open-source large language models (LLMs) to prevent historical distortion by proprietary LLMs. These calls emphasize the necessity of a free and diverse set of AI systems, drawing a parallel to the need for a free and diverse press.

ChatGPT's Spanglish gibberish

Even with the best training data, artificial intelligence systems can sometimes go haywire. A recent example is OpenAI's ChatGPT. Following an upgrade with new data and hotfixes, users reported the chatbot producing Spanglish gibberish and getting stuck in infinite loops. While these gaffes may seem amusing, they also point to the occasional unpredictability of AI systems and their potential to produce unexpected and nonsensical outputs.

Verifying humans with the Humanity Protocol

As artificial intelligence continues to advance, so too does the technology to distinguish humans from AI. Animoca Brands and Polygon Labs have introduced the Humanity Protocol, a project that uses palm recognition technology and blockchain to verify human users. This technology integrates with users' mobile phones and uses zero-knowledge proofs to ensure privacy while validating credentials. As we continue to navigate a digital world populated by AI, such tools contribute to maintaining the distinction between human and machine.
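The Humanity Protocol's actual construction is not detailed here, but the core idea of a zero-knowledge proof can be illustrated with a toy Schnorr-style identification scheme: a prover demonstrates knowledge of a secret (say, one derived from a biometric credential) without ever revealing the secret itself. All names, parameters, and the tiny demo group below are hypothetical, chosen for readability rather than security.

```python
import secrets

# Toy Schnorr-style identification sketch. Demo-sized group only:
# real systems use large standardized groups or elliptic curves.
p = 2**127 - 1          # a Mersenne prime, used as the demo modulus
g = 3                   # generator for the demo group

def keygen():
    x = secrets.randbelow(p - 1)        # prover's secret credential
    y = pow(g, x, p)                    # public key registered on-chain
    return x, y

def commit():
    r = secrets.randbelow(p - 1)        # fresh randomness per proof
    t = pow(g, r, p)                    # commitment sent to verifier
    return r, t

def respond(x, r, c):
    # Response blends the secret with randomness; r masks x,
    # so the verifier learns nothing about x itself.
    return (r + c * x) % (p - 1)

def verify(y, t, c, s):
    # Checks g^s == t * y^c without ever seeing x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
r, t = commit()
c = secrets.randbelow(p - 1)            # verifier's random challenge
s = respond(x, r, c)
print(verify(y, t, c, s))  # True
```

Each run uses fresh randomness, so transcripts cannot be replayed; the verifier only learns that the prover knows the secret behind the public key, which is the privacy property the article attributes to the protocol.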

Advances in AI-generated videos

The realm of AI-generated videos is seeing significant advancements. OpenAI's Sora, a text-to-video generation tool, has been gaining attention for its impressive capabilities. Sora uses diffusion and transformer architecture to convert random noise into sequential video frames, resulting in convincingly realistic videos. This progress in AI video generation demonstrates the continually improving capabilities of AI systems and their potential to blur the line between reality and AI-generated content.
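Sora's internals are not public, but the diffusion idea the paragraph describes, starting from pure noise and iteratively denoising it into coherent frames, can be sketched in miniature. The `toy_denoiser` below is a hypothetical stand-in for the learned transformer; everything here is illustrative only.

```python
import numpy as np

def toy_denoiser(x, t):
    # Placeholder for a trained model: in a real system this network
    # predicts the noise present in x at timestep t. Here it simply
    # returns a fixed fraction of the sample, purely for illustration.
    return x * 0.1

def generate_frames(num_frames=4, height=8, width=8, steps=10, seed=0):
    rng = np.random.default_rng(seed)
    # Start the whole clip as random noise, one array per frame.
    x = rng.standard_normal((num_frames, height, width))
    for t in range(steps, 0, -1):
        predicted_noise = toy_denoiser(x, t)
        # Each reverse step removes a portion of the predicted noise,
        # gradually turning static into structured content.
        x = x - predicted_noise
    return x

frames = generate_frames()
print(frames.shape)  # (4, 8, 8)
```

Even this toy loop shows the defining behavior: the signal's noise magnitude shrinks with every reverse step, which is what lets real diffusion models turn random noise into realistic, temporally ordered video frames.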
