AI-Related News That Surprised Me This Week
Even with many hours of AI research under my belt, and a portion of each day writing about technology, there is much that still shocks me. And we should all lean into surprise. Here is why...
Even with my experience and the work I do, I’m still often floored by what I learn about AI and other technological innovations. And I bet if you allowed yourself, you would be surprised (or shocked) more often, too.
The problem with an industry that we’ve allowed to become quite insular and storied is that technology literacy isn’t considered a shared core competency. So we second-guess ourselves. When we instinctively think something is strange or “off,” we wonder if we’re really in any position to question it.
But I’d like to encourage you to rethink this gut response. If for no other reason than to recognize that the dominant technology today is intended to mimic us. As I share in my book, we are both the “product” and the consumers of AI, so things are different now. We get to have a say.
So today I thought I’d share a few examples of the type of news and information that got me talking to friends and family recently. And when you read something that gets your attention, I hope you’ll similarly bring it up.
This Week’s “Did You Know?” List
The First Chatbot was Created Nearly 60 Years Ago
MIT researcher Joseph Weizenbaum created a chatbot named ELIZA in 1966. The program was engineered to replicate a conversation that a psychotherapist might have with a patient. So if you were to say “I’m unhappy with my job,” ELIZA might respond “Why do you think you might be unhappy with your job?” While Weizenbaum had initially intended to show the limitations of machine-human interaction, he inadvertently illustrated how quickly we can attribute human characteristics to a machine—and get emotionally attached.
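To make the mechanism concrete, here is a minimal sketch of ELIZA-style pattern reflection in Python. It’s an illustrative toy under my own assumptions (the rules and wording here are made up), not Weizenbaum’s original script:

    import re
    import random

    # Toy ELIZA-style rules: match a pattern in what the user says,
    # then reflect part of their own words back as a question.
    RULES = [
        (re.compile(r"i'?m (.+)", re.IGNORECASE),
         ["Why do you think you might be {0}?",
          "How long have you been {0}?"]),
        (re.compile(r"i feel (.+)", re.IGNORECASE),
         ["Why do you feel {0}?", "Do you often feel {0}?"]),
    ]

    # Swap first-person words for second-person so the echo reads naturally.
    REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(word, word)
                        for word in fragment.lower().split())

    def respond(statement: str) -> str:
        for pattern, templates in RULES:
            match = pattern.search(statement)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return "Please tell me more."  # a generic fallback, much like ELIZA's

    print(respond("I'm unhappy with my job"))
    # Possible output: "Why do you think you might be unhappy with your job?"

A handful of rules like these is enough to make a conversation feel surprisingly personal.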
What surprised me when learning about ELIZA was that the psychological implications of these types of interactions were already clear in 1966 (the phenomenon is even called the “ELIZA effect”). And there have been many variations of chatbots since.
So why weren’t we better prepared for the psychological impact of generative AI tools like ChatGPT on those who are vulnerable, and especially kids? Well, in part it’s because we’ve isolated technology from the social sciences and the arts. We talk today about the importance of interdisciplinary study, and of ethics and philosophy in AI development, but it’s clear it should have been that way all along.
The “Humanoid” Market is Expanding Rapidly
Just like with ELIZA, we need to be better prepared for the emotional and psychological consequences of new technology. And the future of robots that resemble humans is a big one. Getting our kids prepared isn’t sci-fi; it’s an exercise in emotional resilience.
First, the market is growing fast, with projected growth of 50% per year over the next decade and various estimates putting it in the range of $38 billion by 2035.
China is also far and away the global leader in humanoid development, and there are projected to be billions (yes, you read that right) operating by 2040. And again, these aren’t just “robots” but “humanoids”: machines with two arms, two legs, and a design meant, in some capacity, to resemble us.
Why should families pay attention? Well, it doesn’t matter how skilled or expensive these robots become over the next 10 years. What matters is that once ChatGPT and others take on a humanlike form, we may find ourselves experiencing an existential crisis. So, as with generative AI, let’s not overlook having conversations with our kids about what the future will look like and what these machines will mean to them.
Loyalty Programs are Taking Data to Train AI
It’s no surprise that loyalty programs provide discounts and incentives in exchange for our data. But even so, I was riveted (and not in a good way) by a recent report from the UC Berkeley Center for Consumer Law & Economic Justice and the Vanderbilt Policy Accelerator about how far astray these programs have gone. I recommend reading the full report; it made me gulp. A few highlights:
Like Monopoly at McDonald’s? Well, you should definitely know the company is using the data to train its AI models. They are also building extensive profiles of customers, “…predicting their ‘preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, or aptitudes,’” the report notes.
Loyalty programs can create literal novel-length dossiers on us. A report on grocery chain Kroger noted: “One profile stretched across 62 pages, with inferences about the consumer’s income, gender, household size, and education level.”
Loyalty data is shared with dozens of companies, from other brands to data brokers and beyond. It’s an ecosystem bigger than most of us can imagine, and the data we hand over is probably worth far more than the perks we get in return.
You are being micro-targeted on price. You might not pay the same as your neighbor for goods or services, because companies know your income and your appetite for spending.
We need to stay curious, ask questions, and lean into our surprise. These are just a handful of examples to get you thinking about what may have caught your attention this week. Let me know.
Also Happening
Book Update 🎉
Thank you for your notes of support for my new book. It’s now available pretty much everywhere books are sold. If you’d like to see it in your local bookstore, you can ask them to order a copy.
I also now have a Bookshop.org storefront, where I highlight the book and share the titles I mentioned in AI for Families. You can visit here (note: I receive a small commission on books purchased in my store).
Women in AI
A long overdue shoutout to software engineer, book author, and Substack writer Karen Smiley, who has curated a database of women who write about AI. In addition to the amazing database, Karen sends out a digest of updates that’s worth checking out. We need far more women talking about technology, and Karen has done a lot to surface these important voices from around the world.