ChatGPT: Not sentient, but rather devious…

AI chatbots like ChatGPT and Replika have certainly captured the public’s imagination in recent months. It seemed like we had been waiting an eternity for the Turing Test to be passed, and then, in the run-up to Christmas last year, ChatGPT launched (around the same time the scientific community announced a breakthrough in nuclear fusion); big news indeed!

ChatGPT, plus various tools like it, have gained mainstream attention, not just for the ease with which they automate tasks and provide ‘insight’, but also for their lifelike communication.

For the average online user (i.e., not a tech geek), it is not particularly well understood that these tools are just very clever word-matching algorithms, not true ‘intelligence’. They produce the statistically most likely sequence of words, drawn from the vast volume of text scraped from internet sources, in response to the words humans have entered as input. This is of course a wild simplification of what’s really happening, and the results are quite staggering in many instances. In fact, these machines can seem scarily alive (even this Google engineer was convinced).

However, in all the excitement about what these machines can do for us, and the pondering over how alive they really are, one rather large point was missed: that of user data. All the focus was on the data ChatGPT et al. could provide, but no one seemed to be asking what data these bots are capturing from us.

Turns out a lot.

So much so that Italy’s national privacy regulator has just ordered a ban on ChatGPT over alleged violations of data privacy law (LINK), accusing its creator OpenAI of the “unlawful collection of personal data”. It says there is no “lawful justification” for all the personal data collected, and it also takes issue with the lack of age restrictions on the tool, which could potentially expose minors to unsuitable material.

OpenAI has been given 20 days to communicate what measures it is taking; otherwise it faces a fine of up to €20 million.

So next time you turn to a chatbot to answer a question, or maybe even to tackle your kid’s homework, take a minute to ask yourself whether you know how the chatbot may be monitoring you in turn for the service.

Chatbots like ChatGPT aren’t sentient, but they do appear to be a little devious… maybe that makes them just a little bit human after all.

References: https://www.theverge.com/2023/3/31/23664451/italy-bans-chatgpt-over-data-privacy-laws