In addition to the top 10 vulnerabilities in web applications, the non-profit project OWASP (Open Worldwide Application Security Project) has now also published a top 10 for large language models (LLMs, Large Language Models). In cooperation with around 500 experts from all over the world, the project compiled the most important weaknesses of the AI models in a process lasting several months, which have not left the world alone since the release of ChatGPT at the end of 2022.
Potential Security Risks
The OWASP top 10 list includes the following items:
LLM01: Prompt InjectionLLM02: Insecure Handling of OutputLLM03: Poisoning of Training DataLLM04: Model Denial of ServiceLLM05: Vulnerabilities in the Software Supply ChainLLM06: Disclosure of Sensitive InformationLLM07: Insecure Plugin DesignLLM08: Excessive PermissionsLLM09: Excessive TrustLLM10: Model Theft
Of the ten points, possible vulnerabilities in the software supply chain (LLM05) and insecure plugins (LLM07) can also be found outside of the AI domain. LLM02 is particularly serious in the case of language models, since the systems tend to errors known as hallucinations, but are particularly convincing to the user due to their construction (LLM09). Here the technology increases the problem users. This, in turn, is also evident in LLM06 when sensitive data is used as prompts, which then end up in the model operator’s training data and can theoretically be output again. LLM10 describes the copying and thus stealing of entire models, which can result in economic losses for a company.
Poisoning of the training data (Data Poisoning, LLM03) is particularly important for AI, with attackers trying to falsify the results and output of models from the ground up. The credibility of the models plays a role here again, but too many permissions for a chatbot (LLM08) can lead to unforeseen consequences if the results are incorrect or manipulated. Model Denial of Service (LLM04) attackers feed the models with resource-hungry operations in order to paralyze the system or generate excessive costs.
In addition to the weak points, the OWASP Top 10 for LLMs also offers a handle for ironing out the problems. However, developers should not simply work through the whole thing like a checklist. In addition to web applications and AI, the top 10 also includes API security.
The fact that the dangers are drastic is particularly evident in the often playfully used jailbreaking of chatbots. Users pretend to be admins, for example, and thus access internal documents that are integrated into the language AI. Prompt injection can also be used to automatically break blocks for specific output on sensitive topics or illegal activities. The fact that people often enter sensitive data into the chatbots is proven by Google’s call to its developers not to put any more source code into their in-house chatbot Bard and not to use the results of such debugging.
Detailed information on the OWASP Top 10 for Large Language Model Applications and the lists of vulnerabilities and mitigations can be found on the project’s website.
Go to home page
#OWASP #Top #biggest #weaknesses