This Week: Archdiocese of New York's AI Warning Butts up Against School Social Media Content 🤔
Even well-intended, on-the-right-track concerns about *AI* these days often seem out of step with what it really means to protect ourselves and our kids. Let's start with those social media posts...
If your child is in a Manhattan parochial school, you may have received an AI *warning* flyer from the Archdiocese of New York. The document is a well-intended, clear overview with reasonable and accurate concerns (and even throws in a mention of AI’s benefits).
But…
Like many similar communications circulating right now, these warnings let adults and kids off the hook too easily by glossing over the actual day-to-day work required to mitigate AI’s risks.
College Choice Posts on Instagram
At the same moment our school shared the AI harms flyer, they were (like many schools) posting cute photos of students in their college sweatshirts on Instagram.
“Oh, come on,” you say. “Don’t take away our fun, surely these posts aren’t that problematic.”
But they are.
In the context of AI development and deepfake creation, personal details like where a child is going to college, what they are studying, and where they attend high school are juicy details for bad actors. Criminals can cross-reference them with a trove of additional detail available about families, including home addresses, parents’ names, careers, and hobbies. Even a photo of a nameless teen in a college sweatshirt is enough.
Rite of passage? Well, it wasn’t one a decade ago, so we need to adjust our expectations about what is reasonable today and tomorrow.
Digging Into the Detail
One of my heroes among digital safety warriors is Lisa LeVasseur, Founder of Internet Safety Labs. I caught her making a point this week on LinkedIn that I wholeheartedly agree with: we need to stop talking about AI in the broadest sense and instead delve into the details of its different technologies.
Once you understand the specific applications, you’ll be better able to understand why any one piece of data or activity could be problematic.
From the Archive: Privacy and Deepfakes
➡️ Why girls are most affected by deepfake creation right now »
➡️ How to protect your family from the kidnapping AI voice scam »
Biggest News This Week
Google launched Veo 3 and it’s remarkable. The launch also takes us a step closer to a complete disruption of our ability to determine which video content is *real* and which is AI generated.
The Wall Street Journal used the Google technology to create a short film and then explained how it was done. It’s worth viewing with your kids.
So much content has been created with Veo 3 in just the past week that it’s flooding an already *slop*-filled Internet. It’s a good time to review what that means »
Last Thought: Don’t Stop Asking Questions About AI
For this newsletter, I had the choice of inserting my own image(s) into the post or “generating” visual content. I chose the latter, and with one simple prompt (“show a diverse group of kids in superhero costumes playing with a robot”) out popped the image below »
It’s pretty easy to say “cool” and move on. But we should all resist the urge to accept what we do not understand.
In my case, I wanted to know what technology Substack was using and where the training data came from.
Generally, a commercial entity clearly discloses any plugin or partner it relies on. But in this case, I struggled to get an answer. So I took the “it takes one to know one” approach and asked Claude AI.
You might be interested in Claude’s response. I hope you’ll read it.
The biggest takeaway here: don’t stop asking how or why. Stay curious. It’s the single most important step we can take toward shaping the future.