A Sage, Three Professors, and Alice in Algorithmia
#NEWSLETTER | A glimpse this week into how academia is tackling AI development and its sticky moral and societal issues, as well as an update on fake video "AI slop" vs. cute (& very real) elephants…
Last week, I attended a roundtable and reception hosted by NYU's Abu Dhabi campus in New York. Academia has been wrestling with the complicated, and often existential, questions surrounding AI for some time, but this gathering was particularly enlightening for the diversity of issues covered.
I wanted to share some of what I heard because it was thought-provoking, but also because it reinforced points I made recently, in "College Admissions in the Age of AI," about how AI can, and should, change how we think about higher education.
Over the past few weeks, I've attended several events bringing together academics, industry leaders, advocates, and students to discuss a range of issues related to how we build AI responsibly.
A Sage: The Cross-Disciplinary Example
Harold Sjursen, Professor Emeritus at the NYU Tandon School of Engineering, moderated the event I attended. You wouldn't need to go further than a cursory review of the professor's incredible background to understand how cross-disciplinary work has flourished in academia.
Professor Sjursen is a professor of philosophy and technology. With academic training in philosophy and a lifelong interest in engineering, he has managed to fuse the two over his 40-plus-year teaching career.
And he sees how the professional world is changing too:
“Scientists and engineers are collaborating, philosophers are working together with social scientists, lawyers, physicians and of course colleagues in other disciplines of the arts and sciences. The fine arts as well are increasingly collaborative and even interactive.”
The Three Professors
Each of the presenters at the roundtable tapped into something that connected aptly to much of what families have been talking about: the anthropomorphism of bots and their actual potential for some future version of consciousness; how far along we are in creating humanoid robots; and our need to exert agency over all of it.
Professor Jeff Sebo: Morality, Machines and Animals
Associate Professor of Environmental Studies, Director of the Center for Environmental and Animal Protection, Director of the Center for Mind, Ethics, and Policy, NYU
What increasingly helps us understand how to think about AI development, and how to teach our kids, is the use of analogy. In this case, Professor Sebo's work with animals is on point. Whether or not AI will ever have a version of "consciousness" that comes close to our own (something the professor is exploring in his work), even today we are forced to consider how we "act" with AI bots and agents that "seem" to think.
The analogy here, of course, is to the animal and insect world. Professor Sebo considers how our understanding of consciousness in other living things can, and should, inform our thinking about future sentient AI.
More urgent for us as parents is the question of how, even in the absence of AI "consciousness," our personal interactions with AI chatbots and other increasingly sophisticated agents will affect our own moral well-being.
We don't want our children acting cruelly toward an AI because it "can't feel," but at the same time we don't want kids to defer to a technology that is only mimicking human qualities.
Arguably, this will be one of the most challenging issues parents face in the years ahead, and it's why critical thinking, independence, confidence, and morality are all important considerations when educating kids.
Professor Sebo's new book is called The Moral Circle.
Professor Ludovic Righetti: Robots, What’s Real, and How We Value Tech
Ludovic Righetti, Associate Professor of Electrical and Computer Engineering, NYU; Director of the Machines in Motion Laboratory
Robots are exciting, but what can they really do? And are we investing in technology properly?
We are, without a doubt, heading toward a future where robots will be available to undertake jobs that are dangerous to humans, or even simply onerous tasks that take a physical toll, like cleaning.
But right now, imagination, perception, and reality need to be parsed. To illustrate the current perceptual gap, the professor showed widely shared videos of robots that were not real (I can’t recall whether this was the one he shared, but Kawasaki’s recent video has been making the rounds online).
The professor also posed a question about what we, as a society, regard as a valuable investment. While billions have been invested in self-driving cars, for instance, should we have looked at improving trains instead?
“Technology is not value free,” Professor Righetti said.
Professor Julia Stoyanovich: Alice in Algorithmia
Professor of Computer Science, NYU; Director of the Center for Responsible AI
Professor Stoyanovich most directly echoed the message I feel frequently called to share: we should have a say in all of it.
If we don’t exert personal agency over technology, we are not only missing its potential, but fueling our worst nightmare about how it all could go wrong.
“Responsible AI” is also an unfortunate misnomer, the professor said, as responsibility should be within the scope of everything we create.
The professor then shared the brilliant clip above, “Alice in Algorithmia.” The point, of course, is that we must use AI and related new technologies to own our lives, not allow them to own us.
In Connected News
Often, after I’ve covered an issue, the news continues to add weight to the subject. So I’m going to start collecting examples that reinforce issues (or opportunities) I’ve written about previously. For today, it’s “AI slop”…
Surrealism or Spam? How "AI Slop" Is Overwhelming the Internet
Have you, or your kids, been scrolling through social media lately only to come across horrifying, can’t-look-away, AI-generated video or image content that seems to be overtaking our accounts?
If you read my previous post on the subject, you’ll know this type of AI-generated content is highly clickable, sometimes grotesque, and definitely weird. It’s also clogging up social media, intended both to generate revenue and, often, to destabilize social media algorithms.
This week, 404 Media (my source for the original slop story) shares how natural disasters and other tales of sadness and misery are being exploited by slop creators. It’s worth a read to see how this may affect our kids, and it’s important to help them understand that these images are not real.
We were also given a fascinating juxtaposition yesterday, as video of elephants circling their young surfaced during the earthquake in San Diego. It’s a great opportunity to talk with kids (and with one another) about how to evaluate real phenomena captured on video versus content that is AI-generated.
Any thoughts or questions about this week’s newsletter? Let me know!