Since the generative AI boom began, major tech companies have been working diligently to develop their own “killer app” for the technology. First it was searching online, with mixed results. Now it is the AI assistants that are supposed to bring big business. Most recently, OpenAI, Meta and Google introduced new features for their AI chatbots that allow them to scour the Internet and act as a kind of personal helper.
Advertisement
OpenAI introduced new ChatGPT features that make it possible to have real conversations with the chatbot – with lifelike synthetic voices. OpenAI also revealed that ChatGPT is capable of searching the web (again). Google’s rival bot, Bard, is now integrated into most parts of the company’s ecosystem – including Gmail, Docs, YouTube and Maps. The idea is that users can use the chatbot to ask questions about their own content – for example, by letting it search their email or organize their calendar. Bard is also said to be able to instantly retrieve information from Google searches. Similarly, shortly after, Meta announced that Facebook parent will use AI chatbots for “everything.” Users will soon be able to ask AI chatbots and even AI avatars in the form of celebrities on WhatsApp, Messenger and Instagram – with the AI model retrieving information online via Microsoft Bing search.
But given the limitations of the technology, it’s all a risky bet. That’s because the tech giants haven’t solved some of the persistent problems with AI language models, including their tendency to make things up and “hallucinate.” What is most worrying, however, is the fact that critics believe this could be a security and data protection catastrophe. Because OpenAI, Google, Microsoft and Meta put faulty technology in the hands of millions of people – and at the same time allow AI models to access sensitive information such as emails, calendars and private messages. In doing so, they could leave us all vulnerable to fraud, phishing, and large-scale hacks.
Language models are insecure
Significant security problems have repeatedly been discovered in AI language models. Now that AI assistants can gain access to personal data while surfing the Internet, they are particularly vulnerable to new types of attacks. This indirect prompt injection has been discussed among security experts for a long time. It is extremely easy to carry out – and there is currently no known technical solution.
In an indirect prompt injection attack, a third party modifies a website by adding hidden text designed to change the AI’s behavior. Attackers could use social media or email to direct users to websites with these secret prompts. Once this happens, the AI system could be manipulated so that the attacker attempts, e.g. B. to access the users’ credit card details. With the latest generation of AI models being deployed in social media and email, the opportunities for hackers are virtually endless, security experts warn.
When asked, neither OpenAI nor Meta wanted to comment on what exactly they are doing to protect users from prompt injection attacks and hallucinations caused by chatbots. Meta did not respond in time for publication, and OpenAI did not comment. Google said that the company released Bard as an “experiment” and that users could check Bard’s answers using Google Search. “If users see hallucinations or something that is incorrect, we encourage them to click the thumbs down button and give us their feedback.” In this way, Bard will “learn and improve,” the company said. The problem: This means that each user is responsible for identifying errors, and people tend to place too much trust in answers generated by a computer. Google only made an indirect statement on the subject of (indirect) prompt injection.
also read
Show moreShow less
The company only confirmed that the problem of prompt injection has not yet been solved and is still being “actively researched”. The spokesperson said the company has other systems, such as: B. Spam filters, used to detect and filter out attempted attacks. In addition, hacking tests and so-called red teaming exercises would be carried out to find out how malicious actors could attack products based on language models. “We use specially trained models to identify known malicious inputs and known unsafe outputs that violate our policies,” the spokesperson said.
Dangerous childhood diseases
It’s understandable that with any new product launch there will be initial teething problems. But it says a lot when even former fans of AI language models are no longer impressed. New York Times columnist Kevin Roose found that while Google’s Assistant was good at summarizing emails, it also told him about messages that weren’t in his inbox.
The conclusion can only be this: technology companies should not be so complacent when it comes to the supposed “inevitability” of AI tools. Normal people aren’t inclined to adopt new systems that keep failing in annoying and unpredictable ways – and it’s only a matter of time before hackers use these new AI assistants maliciously. We all seem to be easy prey at the moment. Everyone should think carefully about letting Bard & Co. access their data. If it can be avoided at all for a long time.
(bsc)
To home page
#Big #Techs #bet #assistants #risky