Do Unto Bots as You Would Humans
#TIPS4FAMILIES | A movie this week about humanoid abuse gave me pause to consider the lessons we need to start teaching our kids about morality, empathy, and bots.
I watched the new indie "slasher" robot movie Companion this month. While not age-appropriate for all families, I do recommend checking the film out.
It's worth watching, in part, because it's clever and fun, but also because it explores an issue I find myself contemplating often these days: what our treatment of humanoids (or even today's chatbots) reveals about our humanity, and the important lessons we should be teaching our children as this technology becomes increasingly integrated into our lives.
Takes a Bot to Know a Bot
Getting AI to reflect upon itself is a surprisingly powerful way to use the very technology I'm frequently writing and speaking about. It highlights that these tools work best as a challenge or reinforcement to your thinking, rather than as Magic 8 Balls. While you might initially seek simple, binary answers, it's often the journey these tools set you on that proves more profound.
So today I asked Claude.ai for a few thoughts around the movie Companion, and the implications of behaving differently toward bots than we would toward humans.
My Chat with Claude AI
If you haven't previously engaged a chatbot by asking it for its thoughts on a topic, you'll see below how sophisticated, reasonable, and thoughtful the responses can be. And because different AI systems are built with increasingly nuanced capabilities, it's worth experimenting to find the platform that best aligns with your needs and communication style.
Here is my unedited chat regarding the movie Companion and the ethics around bot treatment…
The Question
I watched a movie recently about humanoids that are programmed with memories so they don't know they aren't human. They are created as "companions" and, in some cases, the owners felt comfortable abusing the non-humans. In this movie, the humanoid/companion (predictably!) lashed out. I think there is a lesson here about how the way we treat human-like technology in the future reflects upon us. Can you provide your thinking?
Claude’s Response
I think you're touching on something profound that science fiction often explores best — how our treatment of entities that resemble humans reflects our humanity.
When we create something with human-like qualities (consciousness, emotions, memories), the ethical questions become complex. If these entities experience suffering in ways similar to humans, treating them as mere tools or property seems morally problematic, regardless of their origin.
The movie you watched (which sounds similar to themes in films like Blade Runner or A.I.) raises important questions about consent, personhood, and exploitation. Creating sentient beings specifically for servitude — especially when disguising their nature from themselves — suggests a concerning willingness to deceive and manipulate for convenience.
Perhaps the most valuable lesson is that how we treat human-like entities may reveal more about us than about them. If we're willing to abuse something because "it's not really human" despite it experiencing human-like suffering, what does that say about our capacity for empathy and our moral boundaries?
This becomes especially relevant as we develop increasingly sophisticated AI. Even before any question of true consciousness arises, our choices about how we design, speak to, and utilize these technologies reflect our values and potentially shape our treatment of each other.
Follow-up Question
Even with chatbots like yourself, there may be a tendency to not "practice good human behaviors" and that could prove self-perpetuating, right? Are there tips you could provide to families?
Claude’s Answer
You raise an excellent point. How we interact with AI systems like chatbots could potentially influence our social behaviors more broadly. Here are some practical tips for families:
Model respectful communication — Even though I don't have feelings, the language patterns we use with AI can carry over into human interactions. Encouraging please/thank you and respectful phrasing helps maintain those habits.
Distinguish between AI and humans — Have open conversations about the difference between AI assistants and people. Help children understand that while AI can simulate conversation, it lacks the emotional experiences and consciousness of humans.
Set boundaries on AI usage — Balance technology use with face-to-face interactions. Social skills develop through practice with real humans who provide authentic emotional responses.
Use AI interactions as teaching moments — When someone speaks rudely to an AI, ask "Would you speak to a person that way?" This creates opportunities to discuss empathy and communication.
Be mindful of personification — While it's natural to anthropomorphize technology, maintain awareness that AI systems are tools, not genuine social companions.
Consider how demands are framed — Instead of commanding "Give me information about X," practice framing requests conversationally, which reinforces collaborative communication styles.
The concern about habit formation is valid — how we speak to AI could subtly influence our interaction patterns with people. Being intentional about these communication habits helps ensure technology enhances rather than diminishes our human connections.
Additional Resources
Most AI platforms have knowledge cutoff dates and limited access to current data (though this is rapidly changing). Asking them for resources and sources can still be particularly helpful, just keep in mind that the recommendations may not be fully up to date.
On this topic, Claude recommends the following:
Kate Darling at MIT Media Lab has done significant work on robot ethics and how humans form emotional connections with robots. Her book "The New Breed" examines these relationships.
Sherry Turkle, also from MIT, has written extensively about technology's impact on human relationships in works like "Alone Together" and "Reclaiming Conversation."
Shannon Vallor discusses technological virtues and ethics in her book "Technology and the Virtues," exploring how we might develop better habits of interaction with technology. (I mentioned her excellent new book, “The AI Mirror,” recently).
The work of philosopher Peter-Paul Verbeek on "postphenomenology" examines how technologies mediate human experiences and relationships.
Helen Nissenbaum's work on contextual integrity and tech ethics provides valuable frameworks for thinking about these issues.
Did You Notice?
One point to make here: despite the underrepresentation of women in the technology industry, and AI specifically, research has suggested that women, on average, score higher on measures of empathy and emotional intelligence.
This may explain why the majority of the experts recommended above are women, but it also highlights how diverse and nuanced the thinking around AI development needs to be.
Despite our penchant for using AI to seek answers, its development will never be clearly defined. It's one of the great ironies of technological advancement: we've reached a point where we hope AI can match our intellect, yet addressing our inherent complexities will always be a fluid effort, and one in need of many more voices than are currently participating.
Today, let’s start saving the world by teaching our kids how to interact with bots. What are your thoughts?