Our AI Fears Have A Social Media Problem
We're having the wrong arguments, in the wrong places, and AI is driving all of it. With the media also incentivized to keep us at war, we do have an "AI problem"—it's just not the one you think...
I was reading an article that said users dislike AI more than social media. And it really gave me pause—not because the sentiment surprised me, but because of what it revealed about how far off track the conversation has gone.
First, it showed the media’s growing appetite to frame AI as something to fear, to fight, to pick a side on. “We all know how hated social media is… so look what’s even worse now…” is a very telling, but unfortunate, editorial choice.
It’s perhaps understandable for an industry whose business model is being disrupted by the very technology it’s covering. But we need better right now. We need clear, well-considered arguments to move us forward.
But here’s the bigger problem with that narrative: The idea that people dislike AI more than social media fundamentally misses the point that AI drives much, if not all, of what we actually experience on those very social media platforms.
Social media isn’t competing with AI. Social media is AI.
Say What? Let’s Break it Down…
That feed you’re scrolling? It’s not organized chronologically or randomly—it’s a ranking model trained on billions of interactions, predicting what will keep your eyes on the screen longest. Every time you linger on a post half a second longer than another, you’re feeding an algorithm. It’s AI, and it’s extraordinarily good at keeping us hooked.
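To make that concrete, here’s a deliberately tiny sketch of what “ranking by predicted engagement” means. This is not any platform’s actual code; the post fields, the affinity scores, and the weighting are all invented for illustration. The only point is that the order of your feed is a prediction about you, not a clock.

```python
# Toy sketch of engagement-based feed ranking (illustrative only; no real
# platform works exactly like this). Posts are ordered not by time, but by a
# score predicting how likely *this* user is to linger on each one.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author: str
    age_hours: float

def predicted_engagement(user_history: dict, post: Post) -> float:
    """Stand-in for a trained ranking model: a crude heuristic that rewards
    authors this user has lingered on before and penalizes older posts."""
    affinity = user_history.get(post.author, 0.0)   # learned from past lingers/likes
    freshness = 1.0 / (1.0 + post.age_hours)        # newer posts decay less
    return 0.8 * affinity + 0.2 * freshness

def build_feed(user_history: dict, candidates: list[Post]) -> list[Post]:
    # Chronology goes out the window: highest predicted engagement comes first.
    return sorted(candidates, key=lambda p: predicted_engagement(user_history, p), reverse=True)

if __name__ == "__main__":
    history = {"@friend": 0.9, "@brand": 0.2}       # made-up "affinity" scores
    posts = [Post("1", "@brand", 0.5), Post("2", "@friend", 6.0), Post("3", "@news", 1.0)]
    for p in build_feed(history, posts):
        print(p.post_id, p.author)
```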
Those ads that feel eerily relevant? The ones that make you think your phone must be listening? It’s not listening—it’s something more precise. AI systems are cross-referencing your behavior, location patterns, purchase history, and the content you interact with to build a targeting profile and serve you exactly what you’re statistically likely to click. A precise mathematical event, not a coincidence.
The recommendations work the same way—the “you might also like,” the suggested follows, the autoplay next video. All driven by deep learning models that have studied what people “like you” consumed next.
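The ad targeting and the “you might also like” suggestions rest on the same basic move: score candidates by what people with similar histories did next. Here’s a hypothetical, stripped-down sketch of that idea using nothing more than co-occurrence counts; real systems use learned models and far richer signals, but the logic rhymes.

```python
# Toy "people like you also watched" sketch (illustrative only). Items that
# tend to show up in the same users' histories get recommended together.

from collections import Counter

# Hypothetical watch histories: user -> items they consumed
histories = {
    "u1": {"video_a", "video_b", "video_c"},
    "u2": {"video_a", "video_c"},
    "u3": {"video_b", "video_d"},
}

def recommend(seed_item: str, k: int = 2) -> list[str]:
    """Return the items most often watched by users who also watched seed_item."""
    co_counts = Counter()
    for items in histories.values():
        if seed_item in items:
            co_counts.update(items - {seed_item})
    return [item for item, _ in co_counts.most_common(k)]

print(recommend("video_a"))  # e.g. ['video_c', 'video_b']
```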
Content moderation at scale is impossible without AI. The posts that get flagged, the accounts suspended, the comments that quietly disappear—those decisions are being made largely by machines, not humans.
And the content itself is increasingly AI-assisted. Captions, thumbnails optimized for click-through, even suggestions about the best time to post: all are routinely generated or refined by AI, often by creators who don’t realize they’re “using AI” at all.
Some creators aren’t even human. AI bot profiles are proliferating across every major platform. Users believe they’re engaging with a real person. Often, they’re not.
Perhaps most mind-bending of all: every interaction you have, including every like, linger, comment, and scroll, is a data point being fed back to make the platform’s AI even smarter. Including the very chatbots so many people are busy arguing against right now.
It’s Similar With Schools…
As the debate on AI heats up, the situation in education mirrors the broader confusion almost exactly.
In a typical US school district, there can be thousands of different apps in use. An annual study by Instructure has found that districts use up to 3,000 tools across classrooms and administrative systems. That number alone should reframe how we think about this debate. Increasingly, the majority of these tools also rely on AI working quietly in the background.
What’s changed is that AI is no longer arriving as a headline feature—it has become part of the core infrastructure of educational technology itself. It’s shaping how student data is processed and analyzed, how content is selected and sequenced, how feedback is generated, and how learning pathways are personalized to individual students.
Platforms like Google Classroom, Canvas, and Khan Academy already embed adaptive algorithms that adjust what a student sees based on their prior performance. AI flags struggling students before a teacher might notice. It grades writing drafts, suggests resources, and tracks engagement patterns across entire schools.
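For readers who want the mechanics made concrete, here is a toy sketch of what “adaptive” means in practice. It is not how any of those platforms actually work; the thresholds and activity names are invented. The shape of the logic, where prior performance decides what appears next, is the point.

```python
# Toy sketch of an "adaptive" lesson picker (illustrative only; not the
# actual behavior of Google Classroom, Canvas, or Khan Academy). The shared
# idea: the next thing a student sees depends on how they performed before.

def next_activity(recent_scores: list[float]) -> str:
    """Pick the next activity from a student's recent quiz scores (0.0-1.0)."""
    if not recent_scores:
        return "diagnostic_quiz"                 # nothing known yet: probe first
    avg = sum(recent_scores) / len(recent_scores)
    if avg < 0.5:
        return "remedial_practice"               # struggling: flag and reteach
    if avg < 0.8:
        return "targeted_exercises"              # partial mastery: more practice
    return "enrichment_challenge"                # mastered: move ahead

print(next_activity([0.9, 0.85, 1.0]))  # -> enrichment_challenge
```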
Most public discourse gets stuck on chatbots, such as whether students should use ChatGPT for an assignment. But that’s a narrow lens on a much larger reality.
The fact that AI systematically underpins nearly all modern edtech today is the conversation that deserves equal, if not more, serious attention right now.
None of this is inherently bad. These tools, used well, can extend what a skilled teacher can do for students. But it does mean that those who are calling to “remove AI from the classroom” are (perhaps unknowingly) really suggesting we eliminate much of modern technology from the classroom altogether.
That may be a defensible position, but it’s worth being honest about what it actually entails.
The truth is that there’s a compelling case to be made for protected, structured “human only” time within a school day. Not as a rejection of technology, but as a deliberate investment in something AI cannot replicate: unmediated human connection, unstructured thinking, and the kind of slow, relational learning that is foundational to how children grow.
If anything, the more AI pervades the environment, the more intentional we need to be about preserving those spaces. And that should be something that everyone can get behind...
Reese Witherspoon and How it’s Gone Wrong
If we don’t understand the big picture related to AI, we risk having the wrong arguments and conversations. And this is something we should be far more concerned about right now.
A telling recent example: Reese Witherspoon made a public comment encouraging women to start using and understanding AI. It was, by most reasonable readings, a straightforward call for empowerment.
But because Witherspoon operates in an industry already bruised by legitimate concerns about creative ownership, digital copyright, and AI-generated content displacing human work, the response was swift and harsh. Writers and creators came for her in a big way.
It’s understandable. The concerns in that industry are real and worth serious debate. But what followed wasn’t really that debate. It was the same binary, pro-AI versus anti-AI framing that lacks nuance and deep thinking. And it played out on social media no less, which, as we established, is itself an AI-curated environment.
The algorithm surfaces the most emotionally charged responses, amplifies the most extreme voices, drowns out the nuance, and around and around we go, controlled by the very thing some say they want eliminated. 🤦‍♀️
We Have to Get Back on Track
This is what going off track looks like in practice. Real concerns—about labor, creative rights, the pace of change—get flattened into a slogan. Nuance gets punished out of the conversation. And the people who most need to be part of a thoughtful dialogue get drowned out by the loudest, most contentious voices.
Our recent societal, online-fueled penchant for “cancellation” is scaring off diverse voices and ironically leading us down the path to an AI future none of us wants.
We are not going to argue our way out of this moment with oversimplifications. The technology is already woven into the fabric of how we communicate, how our children learn, and how information reaches us.
The question was never really “AI or no AI.” It’s always been: who shapes it, who benefits from it, and who gets a voice in deciding?
If we don’t start having that conversation, we all lose. And this isn’t a debate between opposing groups—it never was. It’s about all of us, as humans, figuring out how we best live alongside the machines we all have spent decades building together.
That’s the conversation worth having. And we’re running out of time to have it.
Also This Week
On the Topic of AI in Schools…
I was lucky to participate in the effort to shape New York City Schools’ recently released AI guidance. It was a meaningful first step toward putting structures and guardrails around AI use in the nation’s largest school system, and I’m grateful for the insight it gave me into how that dialogue is being shaped.
As part of the AI Advisory Council, I joined more than one hundred educators, administrators, academics, nonprofit leaders, and industry experts in weighing in on the document the city released.
What struck me most about the experience was the genuine good faith effort of different stakeholders to work through concern, hope, disagreement, and all the sticky issues that a fast-moving technology represents—without pretending the answers were simple. I talked a bit more about that experience on a recent podcast if you want to dig deeper into what I learned.
Let’s all get talking about this more. What are your thoughts?



