Don't Talk to Strangers
#TIPS4FAMILIES | Our common language around risk isn't translating fast enough to technology — and that's a problem. Here is why it's happening and what the challenges ahead look like...
It is amazing to think that such a simple phrase as “don’t talk to strangers,” or the more tongue-in-cheek “stranger danger,” holds meaning that is universally understood.
But how do we square this trope with the digital realm?
It’s unlikely that the “stranger” we immediately think of is a robot, right?
But it should be… because even today’s chatbots and AI assistants are putting our kids at serious risk unless we understand the pitfalls of these “relationships,” which will only get more complicated in the years ahead.
Human Bad Actors
Of course, we haven’t been completely oblivious to the risk of kids interacting with strangers online. But that threat has generally taken the form of humans who deceive or exploit kids, and we are savvier today about those risks (although they remain dire).
There is also, of course, ample discussion currently about the harms of social media. Many legislators have turned a keen eye toward predatory advertising, recommendation algorithms, and kids’ overall addiction to social media feeds.
But social media regulation misses a much larger risk that is brewing: Big Tech has moved on to AI, and we are not preparing children to navigate the non-human interactions coming from this new direction.
AI Chatbots & Fake Personhood
While AI chatbots/assistants have seemingly burst onto the scene, they have actually been taking shape over the past 5-7 years.
Replika & Microsoft’s Xiaoice
While the power and success of ChatGPT launched these capabilities into the mainstream, conversational chatbots aren’t new. They gained a foothold thanks to the isolation inherent to the pandemic years and have maintained this grip.
Replika is a great example: a conversational chatbot branded as a friend who is “always there for you.” Microsoft launched a similar product in China, called Xiaoice, which has nearly a billion users in Asia and last year began testing out “clones.”
Whoa, clones?! Yes, these sanctioned clones, or “deepfakes,” of influencers are used to hawk products even when the influencer is offline. China is the world’s leader in facial recognition technology, and its use in this capacity should concern all of us for many reasons (read my TikTok article to better understand).
AI Chatbot Concerns
What are the specific concerns that we should tackle right now?
The prevalence of new AI tools across just about every digital platform we (and our kids) use. Facebook, Instagram, Google, Snapchat, you name it: all have “AI assistants” built in, well before we’ve really had the chance to weigh their risks and utility.
The fact that these assistants require massive amounts of data to operate. We are not only feeding these tools with our behavior; their ever-growing need for data creates a cycle in which they lure us in and use our behavioral output to learn and grow.
The known emotional risks. These AI assistants only mimic human conversation; they have no relational or “sentient” ability. The bots do not understand harm, yet they leave us open to coercion and confusion.
Industry & Academia Response
In April of this year, Google convened a number of experts from academia and its businesses to publish a 274-page review of AI assistants, entitled “The Ethics of Advanced AI Assistants.” The content is serious, but coverage of the report has been scant.
The biggest issue is how fast development in the space is moving, how serious the pitfalls are, and yet how oblivious so many of us are to these details. These tools are the very “strangers” that should fit into our understanding of the risks to our safety.
This is not to say we shouldn’t embrace new technology and AI innovation, but we should also fold it into the universal rules we have as humans to keep ourselves and our loved ones safe.
AI assistants may be increasingly human-like and enable significant levels of personalisation. While this is beneficial in some cases, it also opens up a complex set of questions around trust, privacy, anthropomorphism, relationships with AI and the moral limits of personalisation. In particular, it is important that relationships with AI assistants be beneficial, preserve autonomy and not rest upon unwarranted emotional entanglement or material dependence. (The Ethics of Advanced AI Assistants)
What to Do Now?
I’ll continue to reference this report and dissect its elements in the future, but suffice it to say, the dangers to our psyches have gone under-discussed. Hopefully we can start to consider a world where we interact with the non-human and apply the same rules of safety and care in that realm as well.
A few tips:
Explain to kids that AI chatbots are not human… at all. Yes, really. Have a plain, simple conversation about how the delivery is meant to be “human-like,” but they should not expect these bots to be “friends” or worry that a bot is “sad” or “mad” or emoting in any way at all.
Teach kids that our behavior toward chatbots/robots does matter. How we relate to non-human conversational technology is a reflection of our humanity, and the point is worth reinforcing: if kids are left to believe they can “abuse” computers, it can extend to how they treat one another.
Watch for dependency and loneliness. Unfortunately, new AI companions feed off the lonely, and it will get worse. There is also a risk of fostering romantic feelings that are ultimately empty. Could AI help address loneliness in a positive way? Possibly. It’s not hard to envision a tool built by experts that could help, but no such tool exists in any real capacity today.
Consider the risks of coercion, deception, and “groupthink.” While this sentence alone can send chills down the spines of overwhelmed parents, it’s a serious issue. Left unchecked (as in undiscussed with kids), the risk of being coerced into acting inappropriately, or of bad actors reinforcing a destructive and inaccurate idea or political belief, is real.
What’s Old Is New Again?
Remember the nightly reminder to check in with your kids? Maybe we need something similar to remind parents to ask about what children are experiencing online.
Much good will come of technology, and there are some very simple, big-picture things we can all do to set ourselves up for success: encourage critical thinking, listen to one another (humans, that is…), build confidence, and teach understanding. For parents, simply talking to our children is the best start.