We Aren't Powerless When it Comes to AI
After two attacks against OpenAI founder Sam Altman, it's clear that a sense of powerlessness over AI is causing more than just run-of-the-mill concern. But we are still in charge. Here's why...
Last week, there were two separate attempts to harm OpenAI’s CEO Sam Altman and his family. First, a man armed with a lengthy “kill list” of AI employees threw a lit Molotov cocktail at the AI leader’s home. Days later, two men shot at his house.
Altman wrote a blog post in response, and it illustrates how unreasonable the current climate has become.
Incredibly, it seems that the most immediate danger with respect to AI is the narrative surrounding the innovation, not the actual technology itself. Dystopian stories, amplified by social media and stoked by mainstream coverage, are proving far more volatile than any algorithm. There’s also a particular irony to the fact that social media (a technology with a well-documented trail of real harm) is a primary accelerant here.
And yet the core idea at the center of all this fear—that someone like Sam Altman is doing this to us—is misguided. It is not the technology that will determine our future, it’s what WE do with it. The problem is that too many of us have come to believe we have no say in the matter at all. And that belief alone will shape the future we build with AI.
The Power of Narratives
As humans, we reach for narratives to help us understand who we are and where we want to go. Generally, there is a balance between what we individually believe to be true and how the perspectives around us shape our thinking. But there have been times when that balance has tipped badly—say, during the Covid-19 pandemic or in the aftermath of 9/11. Destructive narratives tend to rush in precisely when we feel most confused, and they spread like a contagion. Think McCarthyism or the Salem Witch Trials.
Instinctively, we know the difference between healthy caution and quiet surrender. But those defenses weaken on unfamiliar terrain, especially when our children’s futures feel as if they hang in the balance.
In the case of AI, nearly everything that is broken about how we communicate—who gets amplified, what gets rewarded, whose voices dominate—has converged all at once. That dysfunction is the real problem. And the good news is that it is one we have far more power to address than most of us realize.

Why We’ve Gone Off Track
AI is increasingly part of every platform we use—and we’ve been marching toward this more advanced form of automation for decades. From email to movie recommendations and GPS, the foundations of AI have been part of our lives for years.
But it is precisely this pervasiveness, combined with rapid new advancements and the potential for Big Tech to profit enormously, that has pushed many of us into a passive sense of powerlessness.
Too many voices are competing to shape the narrative at a moment when only clickbait and drama seem to break through. But this noise also drowns out some of the technology’s most remarkable achievements. From AlphaFold, the protein-structure prediction program whose creators won the Nobel Prize in Chemistry, to Google’s AI-powered flood forecasting, which has expanded to cover 100 countries and 700 million people with up to seven days of advance warning—the list of accomplishments utilizing AI advancements is already breathtaking.
We’re also missing the perspectives of a more diverse set of thinkers. Mainstream coverage tends to recycle the same handful of commentators, and more measured, nuanced perspectives often go unheard. It is worth seeking those out.
I recommend academic Shannon Vallor’s book The AI Mirror as a thoughtful, pragmatic look at AI and our role in shaping it.
SheWritesAI is also a great community directory of AI-themed Substack accounts written by women from around the world. The group also recently published an anthology entitled AI Everywhere, Volume 1: How Women Are Changing The World With Artificial Intelligence.
The important point is this: the more perspectives we engage with, the better the chance we feel empowered rather than powerless.
A Million Small Decisions
It can be far more useful to think of AI not as a force bearing down on us, but as a series of small, interlocking decisions we make every day—decisions we fully control.
At school, the question is no longer whether AI will be present, but how thoughtfully it is used. Used well, it can be a patient tutor for students, a time-saver for teachers, and a subject worth understanding in its own right.
And the counterintuitive truth is that the skills that make AI most useful (careful reasoning, clear writing, the ability to ask a good question) are also best developed away from screens.
Embracing AI in education doesn’t have to mean more technology. It can, and often should, mean better-chosen technology used more intentionally—and the confidence to put it down when it gets in the way.
At work, the more useful question is not whether AI will change your industry, but which parts of your specific role it will affect—and when. Most of us have more runway than the headlines suggest, and getting familiar with these tools firsthand is well within reach.
At home, the same logic applies. Smart speakers, recommendation algorithms, and adaptive learning apps are already woven into daily family life. The families who navigate this era best will be the ones who engage with it consciously: trying the tools their kids are using, modeling healthy skepticism, and making small, deliberate choices rather than defaulting to whatever the algorithm serves up next.
Diving into the discussion with your kids is also a great start, and I’ve collated a list of conversation starters here.
An important thought to hold on to: those who are most skeptical of AI and those who are most enthusiastic about it tend to agree on far more than either side acknowledges. Both want their kids to think clearly, to be resilient, and to lead meaningful lives.
The path that runs through this is knowing ourselves well—and that has always required stepping away from the noise long enough to listen.
AI is not happening to us. We are happening to it, one small decision at a time. And by recognizing this as a society we hopefully can prevent more of those in the AI industry from having their lives put at risk.