Italy blocks AI app ChatGPT over privacy concerns: what happened?

Italy is standing up to AI (Artificial Intelligence) and in particular to ChatGPT: the Italian privacy guarantor has blocked the chatbot for failing to comply with privacy regulations.

The news was also reported on Watcher.Guru's official Twitter account.

AI problems: why did Italy block ChatGPT?

This Friday, Italy's Garante per la protezione dei dati personali (the Italian data protection authority) ordered, with immediate effect, a temporary restriction on the processing of Italian users' data by OpenAI, the US company that developed and operates the ChatGPT platform.

The authority has also opened an investigation into the company, founded initially as a non-profit by Sam Altman and now backed by Microsoft under Satya Nadella. Specifically, the measure states the following:

“The Privacy Guarantor notes the lack of information provided to users and to all interested parties whose data is collected by OpenAI, but above all the absence of a legal basis justifying the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform.”

In other words, Italy accuses the software of collecting personal data without justification. In this regard, recent checks show that the information provided by ChatGPT does not always match the actual facts, leading to inaccurate processing of personal data.

No verification of the age of users in ChatGPT

This all stems from the fact that ChatGPT, the best-known relational AI software capable of simulating and processing human conversations, suffered a data breach on 20 March involving users' conversations and the payment information of subscribers to the paid service.

In addition, the authority points out the absence of any filter for verifying the age of users, even though, according to OpenAI's published terms, the service is intended for people over the age of 13.

According to the authority, because this requirement is not enforced, minors are exposed to responses that are entirely inappropriate for their level of development and self-awareness.

OpenAI, which has no office in the European Union but has designated a representative in the European Economic Area, must communicate within 20 days the measures taken to comply with the watchdog's requests, under penalty of a fine of up to 20 million euros or up to 4% of annual global turnover.

The ChatGPT AI app responds to measures taken in Italy

At the moment, the ChatGPT software itself appears unaware of its fate. When some users asked the chatbot about the block imposed in Italy, they received the following response:

“I’m sorry to hear that the Italian Privacy Guarantor has blocked me. However, as a language model, I am unable to interact with the Privacy Guarantor or know the specific decisions it has made regarding my operations in Italy. I’m here to answer questions or provide information about my natural language processing abilities, if you need help with that.”

It will probably take a few hours for the block to take effect, just as happened with the Replika app, another chatbot deemed dangerous to minors and barred from using the personal data of Italian users.

For now, the problem mainly concerns companies that had subscribed to integrate the software into their work. However, the Garante's statement does not clarify whether the block will also affect the paid Plus version or only the free tier.

Not only Italy: Musk also warns about AI and ChatGPT

Recently, Elon Musk also stated that he believes it is necessary to take a break from AI. Specifically, the Tesla CEO said the following:

“Artificial intelligence poses profound risks for society and humanity: this is why it is important to take a six-month break and stop training the most advanced systems.”

In addition to Elon Musk, a large group of researchers and executives, roughly 1,000 in total, has raised this alarm by signing an open letter calling for a pause on the most advanced artificial intelligence systems.

The open letter was published by the non-profit Future of Life Institute. The missive includes other distinguished signatures in addition to Elon Musk's, which is why its contents have been taken seriously.

In fact, the document was signed by Apple co-founder Steve Wozniak and executives from Pinterest and Skype, among others. However, not all AI services are targeted, only systems more advanced than GPT-4, OpenAI's latest model, which can tell jokes and pass bar exams with ease.

In fact, the open letter reads:

“Powerful AI systems should only be developed when there is confidence that their effects will be positive and their risks manageable. Instead, AI labs are locked in a runaway race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or control.”