New Delhi: Large language models (LLMs), such as OpenAI’s GPT-4, are in high demand thanks to their powerful text analysis and generation capabilities. However, their generalist nature can be a drawback for organizations deploying LLMs for specialized tasks, such as generating creative copy in a brand’s style.
Even the best LLMs can struggle to respond consistently when instructions become highly specific. One option is to fine-tune an LLM or narrow its scope, but doing so is often technically tricky and expensive. Reka, which came out of stealth today with $58 million in funding, was founded by a group of researchers from DeepMind, Google, Baidu, and Meta who set out to find a simpler solution. DST Global Partners and Radical Ventures led the tranche, with participation from strategic partner Snowflake Ventures and a cohort of angel investors that included former GitHub CEO Nat Friedman.
Based in San Francisco, Reka was founded by Dani Yogatama, Cyprien de Masson d’Autume, Qi Liu, and Yi Tay. The four co-founders say that while building AI systems such as AlphaCode and Bard, they concluded that it was unrealistic to expect a single massive LLM to be deployed for every conceivable use case.
“We understand the transformative power of AI and would like to bring the benefits of this technology to the world in a responsible way,” Yogatama said in an email interview. “Reka is a research and product company that develops models to benefit humanity, organizations and enterprises.”
Reka’s Yasa is a multimodal AI “assistant” trained to understand images, videos, tabular data, and other media in addition to text. According to Yogatama, it can generate ideas, answer basic questions, and draw insights from an organization’s internal data.
In this regard, Yasa is comparable to models such as GPT-4, which can also understand text and images (a capability still in closed beta). The twist is that Yasa can be easily adapted to private data and applications.
“Our technology allows enterprises to benefit from progress in LLMs in a way that satisfies their deployment constraints without requiring a team of in-house expert AI engineers,” Yogatama said.
Yasa is only the beginning. Next, Reka plans to focus on AI that can ingest and generate even more types of data, learn continuously, and stay current without needing to be retrained.
Reka also offers a service, currently available to only a handful of customers, that adapts its LLMs to proprietary or confidential company datasets. Customers can run the resulting “distilled” models on their own infrastructure or through Reka’s API, depending on the application and the project’s constraints.
Not to be outdone, market leaders such as OpenAI now offer tools for fine-tuning their models and connecting them to the internet and other sources to keep them up to date.
But one early customer (and investor), Snowflake, was won over by Reka’s pitch. Snowflake has partnered with the startup so that its clients can deploy Yasa from within their Snowflake accounts. AI training data company Appen also recently revealed that it is working with Reka to build customized, multimodal-model-powered apps for the enterprise.
Reka’s broader pitch is that it gives every business the power and potential of an LLM without the usual tradeoffs. Its distilled Yasa models keep data inside the enterprise, are efficient in cost and energy, and don’t require an expensive in-house research team to build models from scratch. If every business is to become an “AI” business, Reka aims to give each enterprise its own production-quality foundation model.