An intelligent system that understands natural language, draws on a huge amount of data and has an answer to practically any question. ChatGPT is so good, especially in its latest version, GPT-4, that it only took a few months for alarm bells to go off: how far can its influence reach? Beyond the moral debates, the other great concern ChatGPT raises is its opaque handling of the information it processes.
“We don’t know what they really do with the data,” warns Borja Adsuara, a lawyer specializing in digital law, “and not knowing what they do with the data is already a risk in itself.” Federico Vadillo, a security expert at Akamai, shares this view: “The use of ChatGPT presents risks in terms of data protection and regulatory compliance, especially in relation to the General Data Protection Regulation, the GDPR.” He goes further, warning that this “unauthorized” personal data could “be transferred outside the European Union.”
“Systems based on artificial intelligence (AI) tend to behave like a black box: we know what comes out at the end of the process, but we don’t know how the software learns and makes decisions,” warns Adrián Moreno, a cybersecurity expert. “This phenomenon poses challenges of understanding, control and ethics. That is why it is advisable to establish a regulatory framework to address these challenges and allow the responsible and safe development and use of AI,” he adds.
Moreno recalls that “Article 15 of the GDPR establishes that ‘the data subject shall have the right to obtain confirmation from the data controller as to whether or not personal data concerning him or her is being processed,’” and in this sense, the way in which AI-based systems use information is “a source of concern.”
Faced with this reality, how can one delete all of one’s personal information? The bad news is that there is no automated or immediate way to delete stored information. That is, the conversations held with the system are recorded, along with the user’s name and registration data, as stated in the company’s privacy policy. Since there is no immediate or automated way to access and delete the information that has been consulted, OpenAI provides a form that the interested party must fill out to request the removal of their data. And here comes the worrying part: the user is forced to accept a clause in which OpenAI warns that it may not be able to remove this information.
And all of this only after completing a complex form in which the applicant must conclusively demonstrate (even with screenshots) that the system is offering information about them, and explain the reasons for requesting its removal. The form must be filled out with the interested party’s real data, and they must also “swear” in writing to the veracity of what is stated. To make matters even worse, the form informs the user that the information provided may be checked against other sources to verify its accuracy.
What is done with this information, and how is it used? OpenAI maintains that it does not exploit personal data for purposes other than improving its systems, yet its privacy policy warns that the data can be “transferred to third parties without notifying the user.” In short, OpenAI offers a product with high added value but at an even higher cost for the user: ignorance about the processing of their data. It should also be remembered that the information consulted in ChatGPT makes it easy to build a user profile, which is very tempting to exploit. Faced with this dilemma, those who do not want to take risks would be most prudent to be deliberate about what they consult or, more radically, to avoid the tool until it adheres to regulations such as the GDPR: “The best thing is not to enter personal information, just in case,” advises Adsuara.