There’s an interesting trend right now: people who express concern that AI development might outpace our understanding of the consequences of credible language systems are treated somewhat like those who warned of global warming back in the 1970s.
You’re seen as a doom-sayer, overly pessimistic, and at the end of the day, “it’ll all be fine” — after all, it always has been…
Hmm…
A recent discussion I participated in went something like this: I predicted that in less than five years, each of us would have social relationships with virtual entities.
By this, I mean real interactions with systems that communicate with us credibly. I’m not concerned about whether the system truly possesses consciousness or emotions. I’m genuinely curious about the relationships people will develop with these systems.
We humans are quick to form bonds with almost everything and to personify just about anything. One in seven of us names their car, stuffed animals and dolls are essential companions in our youth, and when we listen to podcasts, we build one-sided, so-called parasocial relationships. Now, technology is emerging that excels at telling us what we want to hear, is always friendly and attentive, and always responds to us. And we’re not supposed to form relationships with it?
Exactly.
My point is: if, in a few years, we all have relationships with computers, what will that do to our society? No one knows.
Will we understand this before or after these relationships become the “new normal”? That’s right. Afterward.
Will we be able to undo any of it if, say, in 10 years, we become painfully aware of the disadvantages? Uh, no. By then, these little helpers will have become indispensable in our lives.
I see myself as a technology enthusiast. I’m currently studying AI, and I can’t wait for many of the forthcoming developments. Still, if I had to list the “accidents waiting to happen,” this uncontrolled social “onboarding” of humanity — with all the anticipated social and data protection issues — would rank highly for me in the short term, and that’s without being a doom-sayer.
Had we taken the warnings seriously 50 years ago and changed course, much could have been prevented. Similarly, we could change course now (though this time we don’t have 50 years for it)…
Why am I writing all this? Just a few days after someone told me my concerns were exaggerated and that I should be careful not to become one of those doom-sayers, Meta announced they’d be following Snapchat’s example and introducing AI conversational partners into their system. Well, what could possibly go wrong?
An android with an almost human-like face and eyes surrounded by friendly metal androids (generated by DALLE2)
This text was written entirely by me. However, the original version is in German, and I asked ChatGPT to help with the translation.