Reka Introduces Yasa-1, a Multimodal AI Competing with ChatGPT

Nicholas · October 5, 2023, 9:37 AM

Reka, an AI startup founded by researchers from DeepMind, Google, Baidu, and Meta, has unveiled Yasa-1, a multimodal AI assistant that understands and responds to text, images, videos, and audio. Its features include support for multiple languages, answers grounded in internet context, and code execution, positioning it as a direct challenger to OpenAI's ChatGPT.

Unveiling Reka's multimodal AI assistant, Yasa-1

Reka, an artificial intelligence company formed by experts from DeepMind, Google, Baidu, and Meta, has launched an innovative AI assistant called Yasa-1. But Yasa-1 isn't an ordinary AI assistant: it is multimodal, meaning that besides processing text, it is also designed to interpret images, short videos, and even audio snippets. This advanced functionality takes AI assistance a step further, catering to a wider variety of user needs and usage scenarios.

Diverse capabilities of Yasa-1

Currently available in private preview, Yasa-1 offers ample scope for customization. Businesses can tailor the assistant to their specific needs using private datasets of any modality, paving the way for novel user experiences. Its versatility doesn't stop there: the assistant is multilingual, supporting 20 different languages, and it can ground its answers in context retrieved from the internet. It also handles long document contexts and, impressively, can execute code, making it a powerful tool for a range of applications.

With its debut, Yasa-1 has positioned itself as a direct challenger to OpenAI’s ChatGPT, another AI assistant with multimodal capabilities. ChatGPT recently received an upgrade, adding support for visual and audio prompts. As such, the launch of Yasa-1 signifies an exciting moment in the AI sphere as it fuels competition, driving innovation and progress in AI assistant technologies.

Insight into Yasa-1's multimodal capabilities

Yasa-1's multimodal capabilities stem from its design: it is built on a single unified model trained by Reka, and it is available via APIs as well as Docker containers for on-premise or VPC deployment. This unified model enables Yasa-1 to understand not just words and phrases, but also images, audio, and short video clips, so users can interact with it in multiple ways, making it a truly multimodal assistant.
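Reka has not published its API schema in this announcement, so the sketch below is purely illustrative of what a request to an API-accessible multimodal assistant generally looks like: a mixed payload of text plus encoded media sent to a single model. The endpoint URL, payload fields, and model identifier are all assumptions, not Reka's documented interface.

```python
import base64
import requests

# Hypothetical endpoint and credential -- NOT Reka's documented API.
# This only illustrates the general shape of a multimodal request.
API_URL = "https://api.example.com/v1/chat"   # placeholder URL
API_KEY = "YOUR_API_KEY"                      # placeholder credential

# Attach an image alongside a text prompt, base64-encoded for transport.
with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "yasa-1",  # assumed model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                {"type": "image", "data": image_b64, "mime_type": "image/png"},
            ],
        }
    ],
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```

The same unified model would receive the text and the image together, which is what distinguishes this style of assistant from pipelines that bolt a separate vision model onto a text-only one.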

Extra features extend Yasa-1's utility

Yasa-1's appeal goes beyond multimodality. Aside from understanding 20 different languages and processing long documents, one of its most impressive features is its ability to actively execute code, though this capability is exclusive to on-premise deployments. With it, the assistant can perform arithmetic operations, analyze spreadsheets, and even create visualizations for specific data points, adding a whole new level of interactivity and utility.
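To make that code-execution feature concrete, here is the kind of small program such an assistant's executor might run when asked to summarize a spreadsheet and chart one of its columns. This is a hedged sketch, not Reka's implementation: the file name and column names are hypothetical, chosen only to illustrate the spreadsheet-analysis and visualization tasks the article describes.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical task: summarize monthly revenue from a user's spreadsheet
# and render a chart. File and column names are illustrative only.
df = pd.read_excel("sales.xlsx")      # load the user's spreadsheet

summary = df["revenue"].describe()    # basic arithmetic / summary statistics
print(summary)

# Aggregate revenue by month and save a simple bar chart for the user.
monthly = df.groupby("month", sort=False)["revenue"].sum()
monthly.plot(kind="bar", title="Revenue by month")
plt.tight_layout()
plt.savefig("revenue_by_month.png")
```

Running generated code like this on-premise, rather than in a vendor's cloud, is presumably why the feature is restricted to on-premise deployments: the data and the execution sandbox stay inside the customer's own infrastructure.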

Reka isn't resting on its laurels with the launch of Yasa-1. The company has made it clear that it has plans in the pipeline to expand Yasa-1's reach by giving more enterprises access to it. At the same time, it's also committed to enhancing Yasa-1's capabilities and addressing its limitations. It's clear that Reka is set on continuously evolving its product to stay competitive and meet the changing needs of its users.
