The Unintended Consequences of Social Media Bans and Other AI News This Week
As technological innovation races forward, many of us crave easy answers. We want there to be a "good" or "bad" guy, or a "side" to join. Unfortunately, in an AI-led world there are no easy answers...
There is nothing more unsettling than wanting an answer about what’s best for our children and coming up empty-handed. This uncertainty is precisely why emotionally triggering headlines and social media tribalism can so firmly capture our attention. But there are no easy answers. And even if there ever were (doubtful), AI has turned the digital realm into shades of grey.
Take, for example, headline-grabbing news around social media bans, such as the one enacted this week in Australia, or the AI detectors that are increasingly being employed in schools to “catch” cheating.
Neither is quite what it seems, and that is why both are excellent examples of the need for families to take the time to really understand risks, intentions, and consequences across every issue.
Australia’s Social Media Ban
You may have heard that Australia has banned social media for kids under the age of 16. You wouldn’t be blamed for immediately cheering on the ban, as on the surface it presents a tidy solution to a gnarly problem.
The complexities and realities of the ban are definitely being debated in the news, with teens, and within parenting groups. The concerns range from whether the ban limits free speech for kids and represents government overreach to whether it will push kids into dark digital corners as they seek workarounds (it’s already happening).
Unfortunately, in the age of AI there is now also a layer of risk that hasn’t been getting enough attention but that families need to know about: age verification technology.
Risks of Biometric and Sensitive Data Collection
As social media platforms face heightened scrutiny, or must comply with full bans such as the one in Australia, there is a natural desire to seek digital solutions to manage and mitigate risk. In this case, that means facial scans that attempt to algorithmically “determine” age.
But these “solutions” also mean an entire “shadow” industry that collects the kind of information we generally only share with critical, well-regulated parties such as banks and hospitals.
Yes, we are increasingly subjected to video surveillance and facial scans in many places these days. And, yes, it can feel like a losing battle. But we need to think carefully each time we share any of our data. And in this case, we should consider whether this is the right approach at all.
Facial data is a highly coveted ingredient for creating deepfakes and other AI-generated fraudulent content. It can be combined with other data to control or steal from us. The implications can be dire, and we need to be smart about how and when we share this information. Cybersecurity and privacy experts are already raising red flags about this approach.
As Sky News reported: “Cybersecurity experts, tech executives and politicians from across the aisle have warned the Albanese government’s social media ban for under 16s could do more harm than good. The laws, intended to protect children from online harms, have instead created a complex web of privacy, data, and scam risks for both minors and adults.”
See, everything has tradeoffs. But some of the most problematic situations arise when “virtue” (e.g., keeping kids under 16 off of social media) comes at a significant “cost” (e.g., free speech, cybercrime, surveillance).
Another excellent explanation from the Electronic Frontier Foundation:
“… All online data is transmitted through a host of third-party intermediaries, and almost all websites and services also host a network of dozens of private, third-party trackers managed by data brokers, advertisers, and other companies that are constantly collecting data about your browsing activity. The data is shared with or sold to additional third parties and used to target behavioral advertisements. Age verification tools also often rely on third parties just to complete a transaction: a single instance of ID verification might involve two or three different third-party partners, and age estimation services often work directly with data brokers to offer a complete product. Users’ personal identifying data then circulates among these partners.”
We must go deep on every issue, no matter how “good” it sounds. Families need to consider what’s best for themselves and not simply for governments, industry, and other third-party interests.
AI Detectors and the Loss of “Personal Voice”
This is another area that’s worth exploring. It’s not one news story or big event, but a series of little issues stacking up each day. And, like age verification, it’s multi-layered.
For the past five years, kids have been thrust into the world of educational technology at a speed we weren’t prepared for, mostly thanks to the pandemic lockdowns (ironically, some of the longest and most restrictive lockdowns were in Australia). Kids are far better than adults at adopting and understanding new technology. So we have encouraged this, but now, with AI, we’re trying to put on the emergency brakes.
The problems with AI detectors are numerous. Their false positive rates are too high. And there is no real way to “detect AI” at all: these tools simply compare statistical patterns of AI output against a kid’s work (and notably, those “patterns” come from the popular ways people write, since that is the data used to train AI). Most worrisome, in this cat-and-mouse game of detection, even kids who don’t use AI at all are losing a sense of what it means to have a personal voice.
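To make the false-positive problem concrete, here’s a deliberately oversimplified sketch of the kind of pattern-matching these tools rely on. To be clear, this is not how any real detector is implemented: commercial products use trained classifiers, and the `ai_likeness_score` function, its word list, signals, and weights below are invented purely for illustration. The point is that “safe” vocabulary and evenly sized sentences, both hallmarks of careful student writing, are exactly the features that get scored as “AI-like.”

```python
# A deliberately oversimplified sketch of pattern-based "AI detection".
# Real products use trained classifiers; the word list, signals, and
# weights here are invented purely to illustrate the false-positive problem.
import re
import statistics

# Stand-in for vocabulary statistics a real detector would learn from
# large volumes of AI-generated text.
COMMON_WORDS = {
    "the", "a", "an", "and", "of", "to", "in", "is", "it", "that",
    "important", "however", "overall", "additionally", "furthermore",
}

def ai_likeness_score(text: str) -> float:
    """Return a 0..1 score; higher means more 'AI-like' under this toy model."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or len(sentences) < 2:
        return 0.0

    # Signal 1: share of very common, "safe" vocabulary.
    common_ratio = sum(w in COMMON_WORDS for w in words) / len(words)

    # Signal 2: uniform sentence lengths ("low burstiness"), a pattern
    # often associated with machine-generated text.
    lengths = [len(s.split()) for s in sentences]
    spread = statistics.pstdev(lengths) / max(statistics.mean(lengths), 1)
    uniformity = max(0.0, 1.0 - spread)

    return 0.5 * common_ratio + 0.5 * uniformity

# Plain, careful student writing: no AI involved, yet it scores high.
essay = (
    "The war had an important effect on the economy. "
    "It changed the lives of the people in the region. "
    "Overall it is a topic that is important to study."
)
print(f"AI-likeness: {ai_likeness_score(essay):.2f}")  # ~0.79
```

Run on that plain three-sentence essay, this toy scores around 0.8 “AI-likeness”: a student who writes simply and evenly looks more machine-like, which is exactly the dynamic kids are reacting to when they second-guess their own vocabulary.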
My daughter told me that her peers debate which words might sound “too smart” or are most likely to be flagged by AI detectors. And this is happening even with kids who aren’t using AI for their work at all.
So are we losing sight of why we write at all? Does this highlight how little time is spent teaching personal voice? Yes, I think so.
There is a lot more to think about here than just “are kids cheating,” and we need to dig in and sit with the topic more deliberately.
AI in Schools Going Forward
Heading into 2026, there will no doubt be many more conversations around AI in the classroom. I’ve been lucky to be involved in some of these important discussions and have a few thoughts to share on the subject…
First, having spent the year writing about AI and kids and poring over academic papers and the work of many important voices, I’ve come to this personal conclusion: we need better technology in the classroom, kids must be more proficient in its use, and far more time should be spent on offline activities (for all of us).
The irony here is that surviving and thriving in the future will require exactly the kind of critical thinking and interpersonal skills that are developed in person and offline.
We need kids to read books, debate, build, play outside, develop empathy, daydream, and sit in silence. We shouldn’t confuse meaningless technology with the type of opportunity AI can bring. We should be talking now about quality over quantity.
Something important to remember when considering what’s in the classroom, too: most edtech platforms include some form of “AI” technology and have for years. These include features such as machine learning, pattern recognition, automated classification, algorithmic inference, statistical modeling, and adaptive pathways.
So, the best approach is to focus on the outcome and the impact, not on the technology alone—and also recognize how personal and local these decisions will ultimately be. Technology is not going away, but we can shape its impact.
What are your concerns about technology in school? Have you had a chance to give it some consideration?
If you haven’t already checked out my book and would like a copy, I’m giving away three for the holidays🎄🕎☃️. You can contact me below. Also, if you have read the book and are willing to write a review on Amazon (it can be short!) I would be grateful too :)

Excellent analysis on age verification tradeoffs. The biometric data collection angle is something most folks celebrating the ban aren't thinking about. I consulted for a school district last spring implementing AI detectors and watched kids literally start writing worse to avoid false positives, dumbing down vocabulary and sentence structure just in case. The shadow industry point is spot on; once facial data gets into these third-party verification networks, it's basically impossible to claw back.