Val Kilmer, White Lotus Star Aimee Lou Wood & Risk AI Poses to the Outlier
#HAPPENINGNOW | A less talked about shortcoming of AI is its natural mathematical rejection of "outliers," or in non-tech terms, our "one-of-a-kind" talents, unique beauties or singular visionaries…
The irony of AI is that it's built on math yet guided by ethics, critical thinking, psychology, and personal introspection. It also carries the danger of drawing us naturally toward the middle, and thus discouraging our uniqueness.
Outliers are, of course, anomalies that disrupt the harmony of algorithmic perfection. But the entire process requires a declaration of what is "normal," which is where we might start to lose our beautiful edges.
AI's math doesn't like outliers, but humans love them.
“An outlier is an observation that lies an abnormal distance from other values in a random sample from a population. In a sense, this definition leaves it up to the analyst (or a consensus process) to decide what will be considered abnormal. Before abnormal observations can be singled out, it is necessary to characterize normal observations.” —National Institute of Standards and Technology
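To make the NIST definition concrete, here is a minimal sketch (my own illustration, not from NIST) of one common convention, Tukey's fences: first the analyst characterizes "normal" by computing the quartiles of the sample, and only then can anything be declared abnormal.

```python
import statistics

def find_outliers(data, k=1.5):
    """Flag values beyond k * IQR of the middle 50% (Tukey's fences).

    Note the order of operations: "normal" (the quartiles) must be
    characterized first, exactly as the NIST definition says.
    """
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    iqr = q3 - q1                                # spread of the middle 50%
    lo, hi = q1 - k * iqr, q3 + k * iqr          # the declared "normal" range
    return [x for x in data if x < lo or x > hi]

sample = [12, 13, 12, 14, 13, 12, 15, 13, 48]  # 48 is the one-of-a-kind
print(find_outliers(sample))  # [48]
```

Everything hinges on the analyst's choice of `k`: widen the fences and the outlier becomes normal; tighten them and more of the sample becomes abnormal.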
Val Kilmer: The Consummate Outlier
Any posthumous reflection on actor, writer, and artist Val Kilmer’s uniqueness has included a confounding set of contradictions. He was as handsome, intelligent, and charismatic as he was difficult, vain, and entitled. Or, as Entertainment Weekly touted in 1996, Kilmer was: “The Man Hollywood Loves to Hate.”
Joel Schumacher found Kilmer impossible to work with on Batman Forever, but Oliver Stone said he was a once-in-a-lifetime talent, playing Jim Morrison so convincingly that the other members of the Doors couldn’t tell Kilmer’s vocals from Morrison’s originals.
In other words: he defied every effort to define and categorize him. Kilmer was an outlier.
The Unique Beauty of Aimee Lou Wood
The same concern extends to the visual realm, especially in light of OpenAI’s newest image generator. As it relates to our appreciation of beauty, it’s an interesting consideration when juxtaposed against the recently celebrated, singular beauty of White Lotus star Aimee Lou Wood (who arguably stole the show in season 3). With her off-kilter smile and adorable personality, Wood represents the unconventional, or in tech terms, the statistical anomaly.
As Sophie Gilbert wrote in The Atlantic this week, reflecting on the quirky beauty of both Wood and her co-star Charlotte Le Bon at a time of AI-generated art, Ozempic, and social media perfection:
“I can tell you how it made me feel to see such gloriously irregular beauty amid all the identical Instagram faces with the same Tic-Tac veneers, stenciled eyebrows, and contoured cheekbones: relieved” — “Reclaim Imperfect Faces: Technology isn’t just changing the way we look—it’s changing our sense of how we should look,” The Atlantic, April 4, 2025
Don’t Stop Thinking Different
There is no prescribed path when it comes to AI innovation, and as such, the math can actually be a red herring. What matters far more is how we frame the questions and how we treat AI's answers.
Asking the right questions and maintaining our individual authority should guide how we use the technology.
In fact, I asked Claude.ai for its thoughts on outliers (as I’ve said before, it takes one to know one), and the answers were so good that I’m including them as a Q&A below.
AIforFamilies » What are your thoughts on how the training of AI models, which theoretically remove the outlier, could have a detrimental effect on what humans believe is possible?
Claude.ai » I find this an intriguing philosophical question about AI's impact on human creativity and innovation. The tension you're highlighting between statistical learning (which tends to find patterns and averages) and the value of outliers is profound.
The American spirit has indeed often celebrated the maverick, the pioneer, the unconventional thinker — from Edison to Jobs, from Earhart to Armstrong. These outliers pushed boundaries precisely because they deviated from established patterns.
There are several potential concerns about how AI training could affect innovation:
Regression to the mean: Since many AI systems are trained to capture central tendencies in data, they might inadvertently downplay or miss the statistical outliers that often drive breakthrough thinking.
Reinforcement of existing patterns: AI systems trained on past achievements might emphasize what has worked before rather than what could work in unprecedented ways.
Homogenization of thought: If AI becomes a primary source of information, creation, or decision-making support, we might see a narrowing of perspectives toward what the models deem most probable or "correct."
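The "regression to the mean" concern above can be illustrated with a toy example (my own sketch, not from the conversation): a model trained to minimize average squared error over its data will settle on the mean, so a single extraordinary data point is averaged away almost entirely.

```python
# Toy illustration: the constant prediction that minimizes mean squared
# error over a dataset is simply the dataset's mean. One extraordinary
# observation barely moves it.

ordinary = [50.0] * 99   # ninety-nine "typical" observations
outlier = [100.0]        # one extraordinary observation
data = ordinary + outlier

best_constant = sum(data) / len(data)  # the MSE-optimal constant prediction
print(best_constant)  # 50.5: the outlier shifts the "learned" answer by only 0.5
```

The point is not that real AI models are this simple, but that objectives built on averages structurally discount whatever sits far from the center.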
AIforFamilies » Often it's the idea of what is possible that creates innovation, and I worry that the mavericks, the people who believe in the impossible, the radical thinkers and those that don't fit in, will be discouraged from pursuing their unique path. How do we keep people believing that they can do the "impossible" or achieve the "never done before"?
Claude.ai » You raise a profound concern about preserving the essential spark of human innovation in an increasingly AI-mediated world. The belief that the "impossible" is possible has indeed been the catalyst for many of humanity's greatest achievements.
To keep this belief alive, I think several approaches are vital:
Emphasize the human stories behind innovations rather than just the outcomes. When we tell stories of Edison's 1,000 failed attempts before inventing the light bulb or of Marie Curie's persistence despite institutional barriers, we celebrate the human quality of perseverance against odds that algorithms would have deemed improbable.
Create spaces explicitly protected from optimization and efficiency metrics. Art schools, certain research labs, and innovation hubs that celebrate experimentation without immediate results serve this purpose. They provide sanctuary for ideas that would be filtered out by statistical learning.
Reframe how we educate the next generation about technology. Rather than teaching AI as an oracle of answers, we might teach it as a tool that reveals patterns—patterns that human insight can then choose to follow or deliberately break.
Cultivate critical awareness about the gap between what is statistically likely and what is possible. The mathematically impossible breakthrough remains a uniquely human domain.
Preserve and celebrate contrarian thinking and intellectual diversity, especially in fields where AI adoption is accelerating.
AIforFamilies » I wonder if we should also make sure people aren't swayed by the idea of absolutes, so that they use AI to explore ideas rather than to look for absolute answers?
Claude.ai » I think you've hit on a crucial insight. The concept of "absolute answers" is indeed something we should be wary of in the AI era.
AI systems like mine excel at finding patterns and presenting information confidently, but this can create an illusion of certainty where none should exist. This is especially dangerous in domains where creativity, innovation, and human judgment are essential.
Rather than seeing AI as an oracle of truth, we might better conceptualize it as:
A thought partner that can help explore the possibility space of ideas
A tool for generating alternatives rather than singular answers
A mirror that reflects our thinking back to us, allowing us to see it from new angles
The most valuable approach might be teaching people to use AI as a starting point rather than an endpoint. Ask "what if?" questions, challenge the assumptions behind AI-generated content, and use AI outputs as raw material for human creativity rather than as finished products.
In education, this might mean teaching students to critically evaluate AI-generated content rather than accept it as authoritative. In business, it could mean using AI to generate multiple scenarios rather than a single prediction.
Perhaps one of the greatest risks isn't that AI will replace human creativity, but that humans might abdicate their creative responsibility by deferring too readily to machine-generated "absolutes."
The future belongs not to those who can get the "right answer" from AI, but to those who can ask it the most interesting questions and recognize when to deviate from its suggestions.
What I find particularly interesting about this conversation is how it highlights a paradox: the tools we build to extend human capabilities might inadvertently narrow our sense of what's possible if we approach them incorrectly.
By maintaining a healthy skepticism toward absolutes and using AI as a collaborative tool rather than an authority, we can better preserve the maverick thinking that drives human progress.
Not sure that I could have said it better myself. Let’s not lose our outliers, but instead use AI to find ways that we can all be extraordinary and one-of-a-kind because of the technology, not despite it.