If you try to figure out whether we have already passed the peak of the AI hype, a two-part picture emerges at the moment: On the one hand, the user numbers of ChatGPT and Co., which initially went through the roof, are falling again in many places. On the other hand, the number of AI tools, services, applications and use cases continues to explode unabated. But only a few offers have so far managed to remain in the general consciousness. Online discussions are increasingly characterized by critical voices questioning the universal applicability and efficiency of chatbots.
Advertisement
He has a weakness for risks and writing about cyber: In his main job as cofounder and CTO of intcube GmbH, David Fuhr rages and rages in this column about current incidents and general truths of information security.
KI: Here to stay
But the party is by no means over. It is the usual course of things that hyped topics have to struggle through the valley of disappointment after the climax, which inevitably comes. The question is whether the topics from the valley will reappear or disappear forever in the sinking of shattered tech dreams. When it comes to generative AI, there is some evidence that it is here to stay. Not only is the number of skills in which Large Language Models (LLMs) are superior to humans constantly increasing, but also the speed at which the systems reach individual milestones.
We are learning more and more about possible vulnerabilities and attacks on LLMs. To a certain extent, this indicates a normalization of the topic: Just as there are OWASP, MITER, ISO and IT-Grundschutz security standards and best practices for enterprise IT, mobile and cloud, all of this is now slowly coming to AI too -Usage and operation. It becomes clear when looking at version 1.0 of the OWASP Top 10 for Large Language Model Applications, which was published at the beginning of August. In order to cope with the new situation, it is necessary to combine well-known knowledge with new knowledge.
The top ten list includes run-of-the-mill vulnerabilities that would affect any other IT infrastructure, such as Supply Chain Vulnerabilities (LLM05) and Insecure Plugin Design (LLM07). Although these have a special flavor due to the special features of AI apps, weak points in the supply chain and insecurely designed components have been known for years – along with countermeasures and the difficulties of implementing them in practice and on a broad scale.
AI has many weaknesses
Then there are vulnerabilities that get a new twist when they appear in AI systems. This is the case with Insecure Output Handling (LLM02), where we have to be particularly careful about what AI could generate in there due to the unpredictability of the output. With Sensitive Information Disclosure (LLM06), the unpredictability can lead to data leaks. Where the computational intensity and complexity of LLMs make it difficult to guarantee availability at any price, this is called Model Denial of Service (LLM04). Model Theft (LLM10) is ultimately a normal digital theft of IP (Intellectual Property), but can be carried out using completely new methods; for example, skillfully asking individual, innocent questions en masse.
Then of course there are vulnerabilities that can only exist in AI applications. For example, training data poisoning (LLM03) is a problem only because machine learning, by definition, relies on training data. Prompt Injection (LLM01) has long since become the new national sport. The attempt is to use cleverly manipulated prompts to seduce LLMs into deviating from the path of their alignment and being less politically correct or intentionally giving wrong answers.
Finally, there are those vulnerabilities that point beyond the actual problems of AI and back at ourselves: Excessive Agency (LLM08) and Overreliance (LLM09). The fact that we increasingly rely, consciously or unconsciously, on the output of AI models, their assessments or even their decisions reveals the true threat potential of the technical revolution as a whole.
Against this background, it is good that we can now use some of the hype energy that has been released to focus on the problems, dangers and limitations of AI. It is important to use this – specifically, without falling into a “we will all die” reflex. There is enough to do.
(pst)
To the homepage
#Security #wont #die