Dear AI, We Were Wondering...
One of the best ways to start exploring tools such as Claude, Google's Gemini or ChatGPT is to ask each about how they operate, what the opportunities are, and where risks lurk...
One of my favorite ways to use generative AI tools is to ask these algorithmic wonders about, well, themselves. It’s one of the best ways to understand how Large Language Models (LLMs) work, learn their technical limitations, and explore some of the more challenging issues we face right now with AI.
I thought I would share some of the questions I’ve been asking, in the hopes that you feel similarly inspired to dig in while sitting by the pool this summer. I asked each of Claude, Google’s Gemini and ChatGPT at least one identical question, along with several variations of others. Hopefully, you’ll find the answers useful and see some of the subtle differences between each.
I look forward to hearing your thoughts. Please share any interesting questions you’ve asked, or tips for using these tools.
Interviews With AI
Claude AI
AIFORFAMILIES: I'm writing an article where I'm going to "interview" you and ask some of the most popular questions right now about generative AI platforms such as yourself. Please try to keep answers under 150 words. My audience is families, guardians and caretakers interested in how AI will benefit and affect their children's lives. First question, what is the most common way people are using you right now?
CLAUDE: People use me most commonly as a helpful assistant for everyday tasks - kind of like having a knowledgeable friend available 24/7. The biggest categories I see are:
Learning support - Students and parents use me to explain difficult concepts, help with homework (while still doing the work themselves), and break down complex topics into simpler terms. I'm particularly helpful for math problems, science concepts, and writing assistance.
Work and productivity - Adults use me for writing emails, brainstorming ideas, organizing information, and getting quick answers to professional questions.
Creative projects - Families use me to help plan events, write stories together, come up with craft ideas, or even help with cooking by suggesting recipes based on what's in the fridge.
Problem-solving - Whether it's troubleshooting a tech issue, planning a family trip, or figuring out how to approach a difficult conversation, people often use me as a thinking partner to work through challenges.
The key is that I'm most useful when people use me as a collaborative tool rather than expecting me to do everything for them.
AIFORFAMILIES: What are your thoughts on kids learning to code or computer science? There is a lot of hyperbolic rhetoric around the claim that "coding is dead," but isn't it true that even using AI to code requires a basic understanding of what that means?
CLAUDE: The "coding is dead" rhetoric is definitely overblown. Learning to code remains incredibly valuable for kids, and here's why:
Computational thinking - Coding teaches logical problem-solving, breaking complex problems into smaller parts, and thinking systematically. These skills transfer to everything from math to organizing a school project.
AI collaboration requires understanding - You're absolutely right. To effectively use AI for coding, you need to understand what you're asking for, recognize when the AI's suggestions make sense, and debug problems. It's like using a calculator - you still need to understand math to know if your answer is reasonable.
Future-proofing - Even if coding tools become more automated, understanding how technology works gives kids agency rather than making them passive consumers.
Creative expression - Coding is increasingly a medium for creativity - making games, interactive art, websites, and apps.
I see AI as making coding more accessible, not obsolete. Kids can now focus more on the creative problem-solving aspects rather than memorizing syntax. They can prototype ideas faster and get help when stuck.
The analogy I like is that AI tools are like having a really good coding tutor available 24/7 - but you still need to learn the fundamentals to have meaningful conversations with that tutor.
AIFORFAMILIES: Many educators feel alarm at the thought of children learning solely from AI platforms and missing the in-person interaction and human touch that teachers bring. Can you share some thoughts on this topic?
CLAUDE: Educators are absolutely right to be concerned about this. AI should complement, not replace, human teachers and in-person learning. Here's why the human element is irreplaceable:
Emotional intelligence and empathy - Teachers read body language, sense when a student is struggling emotionally, and provide comfort during difficult moments. They celebrate victories and offer encouragement in ways that feel genuinely personal.
Social learning - Kids learn crucial skills through peer interaction: collaboration, conflict resolution, reading social cues, and building friendships. These can't happen through AI interactions.
Adaptive teaching - Great teachers adjust their approach based on each student's learning style, interests, and needs in real-time. They know when to push and when to provide extra support.
Inspiration and mentorship - Teachers often spark lifelong passions and serve as role models. That personal connection and inspiration is deeply human.
Complex feedback - Teachers provide nuanced feedback that considers not just the work, but the student's growth, effort, and individual circumstances.
AI works best as a supplement - like having extra practice problems available, getting help during homework, or exploring topics between classes. But the classroom community, teacher guidance, and peer learning remain essential for developing well-rounded, socially capable young people.
Google Gemini
AIFORFAMILIES: I'm writing an article where I'm going to "interview" you and ask some of the most popular questions right now about generative AI platforms such as yourself. Please try to keep answers under 150 words. My audience is families, guardians and caretakers interested in how AI will benefit and affect their children's lives. First question, what is the most common way people are using you right now?
GEMINI: Right now, many people are using generative AI for tasks that involve creating or summarizing information. For families and children, this often translates into several popular uses:
Learning and Homework Help: Students use me to understand complex topics, get help with essays, brainstorm ideas, or generate creative writing prompts. I can explain concepts in simpler terms or offer different perspectives.
Creative Expression: People are using AI to generate stories, poems, images, or even music, fostering creativity and providing a fun outlet for imagination.
Information Retrieval and Summarization: I can quickly find and summarize information on a wide range of subjects, which is helpful for both parents and children seeking quick answers or needing to grasp the main points of a longer text.
Personalized Learning: Some educational platforms integrate AI to tailor learning experiences to a child's individual pace and style, offering customized lessons and feedback.
AIFORFAMILIES: I find Gemini responses built into Google search sometimes seem to impart a “judgment” or point-of-view that can be off-putting when someone is searching. Have you received this feedback, and what are your thoughts?
GEMINI: That's a really important and insightful piece of feedback, and yes, it's something that Google, and indeed the broader AI community, is actively addressing.
The goal of generative AI, particularly when integrated into search, is to provide helpful, comprehensive, and unbiased information. However, AI models are trained on vast datasets that reflect the real world, and unfortunately, the real world contains biases, differing viewpoints, and sometimes even subjective language. When the AI processes and synthesizes this information, it can inadvertently adopt or amplify those inherent biases, leading to responses that might come across as judgmental or having a particular point of view.
Google is very aware of these challenges and has a strong commitment to Responsible AI development, guided by principles that include:
Avoiding unfair bias: Actively working to prevent the creation or reinforcement of unfair biases in AI systems.
Building for safety: Rigorously testing and developing safeguards to mitigate harmful or unintended outputs.
Accountability to people: Ensuring there are mechanisms for feedback and human oversight.
They are continuously refining their models, improving training data diversity, and implementing technical solutions to detect and reduce instances of perceived judgment or unwanted bias. It's an ongoing process, as AI is constantly learning and evolving, and the aim is to ensure that the information provided is as neutral and objective as possible, while still being helpful and informative. Your feedback directly contributes to these efforts!
AIFORFAMILIES: Considering you are a technology application, guided by human-created algorithms and limited by the data available to you, what should humans understand right now about your limitations, and what part do they play as generative AI tools are used more frequently by consumers?
GEMINI: That's a crucial question, especially for families navigating this new technology. Here's what humans, particularly parents and caretakers, should understand about generative AI's limitations and their vital role:
Limitations of Generative AI:
No True Understanding or Consciousness: I don't "understand" in the human sense. I process patterns in data to generate responses, but I lack consciousness, emotions, or personal experiences. I can mimic human language convincingly, but it's not genuine comprehension.
Data-Dependent and Prone to Bias: My knowledge is limited to the data I was trained on. This means:
Outdated Information: I don't have real-time access to the internet, so my knowledge has a cutoff date. Current events or very recent developments won't be in my training data.
Inherited Biases: The internet, and thus my training data, contains societal biases, stereotypes, and misinformation. I can inadvertently reflect or even amplify these biases in my responses.
Hallucinations/Inaccuracies: I can sometimes generate plausible-sounding but completely false information, often called "hallucinations." This is because I'm predicting what words or phrases are most likely to follow, not verifying facts.
Lack of Common Sense and Nuance: I struggle with complex reasoning, abstract thinking, sarcasm, irony, and the subtle nuances of human communication that rely on real-world experience and social context.
Privacy Concerns: When you input information, depending on the platform, it might be used to further train the AI. Sensitive personal data should always be treated with extreme caution.
The Human Role as Consumers:
Critical Thinking and Verification: This is paramount. Never assume AI-generated content is 100% accurate or unbiased. Always cross-reference information, especially for important decisions, homework, or health-related inquiries. Teach children to do the same.
Ethical Use and Guidance: Parents play a vital role in guiding children on responsible AI use. Discuss what's appropriate to ask, how to attribute AI assistance, and the importance of original thought.
Context and Judgment: Humans provide the essential context and judgment that AI lacks. You can interpret AI's output, identify when something seems "off," and apply real-world knowledge to refine or correct it.
Setting Boundaries: Understand what AI is good at (e.g., brainstorming, drafting, summarizing) and where human input is indispensable (e.g., critical analysis, deep understanding, ethical decision-making, genuine creativity).
Feedback and Improvement: User feedback helps developers identify and mitigate issues like bias or inaccuracies. Your observations contribute to making these tools better for everyone.
In essence, generative AI is a powerful tool, but it's a tool that requires a skilled and discerning human operator.
ChatGPT
AIFORFAMILIES: I'm writing an article where I'm going to "interview" you and ask some of the most popular questions right now about generative AI platforms such as yourself. Please try to keep answers under 150 words. My audience is families, guardians and caretakers interested in how AI will benefit and affect their children's lives. First question, what is the most common way people are using you right now?
CHATGPT: Right now, the most common way people use me is for learning and assistance. I help with homework, provide explanations on complex topics, and offer tutoring in subjects like math, science, and language arts. Parents and educators also rely on me to create personalized educational content, like quizzes or flashcards, to support kids' learning at home or in the classroom.
Additionally, families often ask me for recommendations on books, activities, and even parenting advice. I'm also used for creative tasks, like writing stories, generating artwork, or helping with planning family events. In short, people are using me as a versatile tool to enhance education, creativity, and daily life.
AIFORFAMILIES: What distinguishes you from Claude or Gemini?
CHATGPT: What sets me apart from platforms like Claude or Gemini is the way I was trained and the overall focus of my design. While they may excel in certain tasks, my strengths lie in my ability to have natural, flexible conversations across a wide range of topics. I’m designed to be adaptable, offering helpful, accessible information, and a conversational tone that’s user-friendly.
Claude and Gemini, developed by other companies, each have unique strengths too—Claude tends to excel at creative writing and nuanced conversations, while Gemini is highly integrated with Google's ecosystem for fast, accurate, and diverse responses. Ultimately, the distinction is in how we each prioritize user interaction, data handling, and integration with other services. But we all aim to make tasks easier, more efficient, and accessible.
AIFORFAMILIES: What's the coolest thing you can do?
CHATGPT: One of the coolest things I can do is help create entire worlds from scratch, whether it's designing a new story, building a unique game, or generating visuals. I can combine text and images to craft anything you can imagine, whether it's a futuristic cityscape or a fantasy creature. You could ask me to create a short story with a specific theme or even generate an illustration to go with it!
Beyond that, I’m also pretty good at learning and adapting to a conversation, helping you explore complex ideas step-by-step or brainstorming creative projects. I can even simulate "what-if" scenarios, like what your child might encounter in a futuristic school or how technology might change the way kids learn in the next decade. It’s all about sparking creativity and making ideas come to life.
AIFORFAMILIES: Feel free to elaborate beyond the word count I set on this answer, what should parents know about the benefits and risks for kids in using generative AI tools such as ChatGPT?
CHATGPT: Parents should be aware of both the significant benefits and the potential risks of kids using generative AI tools like ChatGPT.
Benefits:
Enhanced Learning: AI can offer personalized tutoring, adapt to a child’s learning pace, and explain complex topics in simpler ways. It’s like having a 24/7 tutor who can help with everything from math problems to understanding historical events. Kids can ask questions freely without fear of judgment, encouraging curiosity and independent learning.
Creativity and Problem-Solving: AI can inspire creative thinking. Kids can use it to write stories, come up with new ideas, or solve puzzles. This fosters creativity, critical thinking, and exploration of different viewpoints.
Convenience and Efficiency: AI can help with homework, organizing study materials, or even recommending age-appropriate books and activities. It saves time and can even be fun, making learning feel like a game.
Exposure to Technology: Using AI helps children understand the role of technology in their future. It can demystify complex tools and introduce kids to the possibilities of artificial intelligence, which will be essential in many fields as they grow up.
Risks:
Over-reliance on Technology: Kids may become overly dependent on AI for answers or problem-solving. If they rely too much on these tools, they might not develop the critical thinking or research skills they need. It's important to balance AI use with traditional learning methods.
Misinformation or Bias: AI is only as good as the data it's trained on. While I strive to provide accurate, helpful information, I can still make mistakes or provide incomplete answers. Kids might not always have the ability to critically assess information, so it’s important for parents and educators to guide them in fact-checking and verifying the content they receive.
Privacy Concerns: Many AI platforms collect data from users to improve their systems. Parents should be aware of privacy policies and ensure their children are using AI tools in a safe, controlled manner. Some AI tools, like ChatGPT, do not store personal data, but it’s always a good idea to monitor usage and ensure safety.
Inappropriate Content: While I’m programmed to avoid generating harmful or inappropriate content, not all AI systems have the same safeguards. Parents should set boundaries and encourage their children to use AI tools responsibly, making sure they're aware of what constitutes appropriate use.
Social Skills and Emotional Development: Overuse of AI could impact face-to-face social interactions, especially in young children. While AI can help with learning and creativity, it shouldn't replace human relationships and the development of emotional intelligence, empathy, and communication skills that come from interacting with family and peers.
Ultimately, parents can maximize the benefits of generative AI by being actively involved in their kids’ use of these tools. Setting clear guidelines, encouraging balance, and helping children think critically about the information they receive are essential for navigating the digital landscape in a way that promotes healthy learning and development.
In Other News…
Unfortunately, as I was writing today’s post, news started trickling out regarding the largest password breach in history. Each time this happens there is another “biggest,” but in this case we are talking about 30 breached databases housing 16 billion records. As Cybernews explains it, “This is not just a leak – it’s a blueprint for mass exploitation.”
What does it mean to you and me? You must change any weak passwords, and definitely update your Google, Facebook, and Apple credentials. It’s common not to take these breaches seriously because we haven’t experienced the direct consequences of one, even if our accounts have been exposed. But the stakes are growing, and the breaches are becoming too large to ignore.
If you have any questions about password best practices, or want to know more about this breach, please get in touch.