Generative AI has emerged as a key player in the burgeoning AI movement, driven by popular applications such as ChatGPT. However, it remains a cause for concern within enterprise settings due to issues around security and data privacy.
The Problem with Generative AI Security
Large language models (LLMs) are the engines behind generative AI and enable machines to understand and generate text just like a human. But whether you want such an application to write a poem or summarize a legal contract, it needs instructions to guide its output. These ‘prompts,’ however, can be constructed in such a way as to trick the application into doing something it’s not supposed to, such as divulging confidential data that was used to train it, or give unauthorized access to private systems.
What are Prompt Injections?
Prompt injections are a real and growing concern. They refer to the practice of creating malicious prompts designed to trick the generative AI application into performing an action it’s not supposed to do. This can include data leakage, unauthorized access, or other security vulnerabilities. Lakera is specifically addressing this issue with its innovative technology.
Meet Lakera
Founded out of Zurich in 2021, Lakera officially launched last October with $10 million in funding. The company has been working tirelessly to protect organizations from LLM security weaknesses such as data leakage or prompt injections. Lakera works with any LLM, including OpenAI’s GPT-X, Google’s Bard, Meta’s Llama, and Anthropic’s Claude.
How Does Lakera Work?
At its core, Lakera is pitched as a ‘low-latency AI application firewall’ that secures traffic into and out of generative AI applications. The company’s inaugural product, Lakera Guard, is built on a database that collates insights from myriad sources, including publicly available ‘open source’ datasets such as those hosted on Hugging Face, in-house machine learning research, and a curious interactive game it developed called Gandalf.
Gandalf: The Interactive Game
Lakera’s Gandalf game invites users to attempt to trick the application into revealing a secret password. As users progress through levels, the game gets more sophisticated (and thus more difficult to ‘hack’). These interactions have enabled Lakera to build what it calls a ‘prompt injection taxonomy’ that separates such attacks into categories.
Real-time Detection of Malicious Attacks
Lakera’s technology uses machine learning algorithms to detect and prevent malicious attacks in real-time. This means that organizations can rest assured that their generative AI applications are secure and protected from potential threats.
Partnerships and Collaborations
Lakera has already partnered with several leading organizations in the field, including OpenAI, Google, Meta, and Anthropic. These partnerships demonstrate Lakera’s commitment to working closely with industry leaders to ensure that its technology is effective and integrated into existing systems seamlessly.
What’s Next for Lakera?
With this significant investment, Lakera will continue to drive innovation in the field of generative AI security. The company plans to expand its product line, enhance its machine learning algorithms, and deepen its partnerships with industry leaders.
Conclusion
Lakera is at the forefront of solving one of the most pressing issues in the field of generative AI: security. With its innovative technology and commitment to collaboration, Lakera is poised to become a leading player in this rapidly evolving space.