MIT PhD and EdTech Leader Jodi Davenport Talks Art, Science, and How to Think Well
I was lucky to get a chance to talk to mom, researcher, and expert on cognition Jodi Davenport about how to teach kids to think and why art will be a critical discipline in the future...
I’m excited to continue to share profiles of people who help us put AI into context as part of the AI for Families “Extraordinary Humans” special section. Look for these every few weeks…
Nothing tells us more about the future of a technology rooted in human thought and behavior than the perspectives of those at the top of their game. In this case, it’s mom, researcher, business leader, and PhD Jodi Davenport.
I met Jodi through a women’s group I joined to discuss AI. What struck me about this cognitive scientist and researcher was how nuanced and expansive her view of AI was—and how much importance she placed on cultivating a well-rounded mind.
Currently, Jodi is Vice President of Learning Sciences and Technology at WestEd and Deputy Director of the Regional Educational Laboratory Northwest. She leads cross-agency strategy and initiatives that help educators and systems thoughtfully integrate digital tools. She also conducts large-scale research projects and advises state education leaders.
But more importantly, Jodi is a mom and advocate for the type of diverse, expansive, and very human learning that will ensure our kids can compete in the future.
On AI With Jodi Davenport
Tell me about your background…
My interest in technology started in third grade when my parents brought home a “portable” COMPAQ computer. In the back of the manual were instructions for programming in BASIC, and I started tinkering. My first program got the computer to play “Mary Had a Little Lamb,” and something about giving a machine instructions and having it do something just clicked for me.
At UCLA, I started off undeclared until I took the intro to cognitive science class. I remember discussing the nature of intelligence (a virus can trick our cells into replicating it; is that intelligence?) and learning how our brain uses shortcuts that are functional for navigating the world but can lead to errors (e.g., optical illusions or faulty logic). I was so excited to have found a field that connected human cognition, artificial intelligence, philosophy, and culture all at once, and I had the chance to work in a lab using neural nets to model how people recognize objects and reason analogically. Those early experiences led me to MIT for my PhD studying attention, memory, and perception, and then to a postdoc at Carnegie Mellon’s Human-Computer Interaction Institute and Psychology Department, where I got to apply learning science to designing instructional systems.
After my postdoc, I joined WestEd and have led large-scale research projects, directed a Department of Education-funded center on math cognition and instruction across five institutions, and supported state education leaders in evidence-based decision making and implementing best practices. When generative AI went mainstream at the end of 2023, it felt like a lot of threads I’d been following for years hit an inflection point. I currently oversee our learning science and technology portfolio and support AI implementation both internally and with our partners.
How do you feel about AI innovation entering our lives?
A friend described it as a “terrible, beautiful thing,” which aptly describes my feelings as I oscillate between extreme enthusiasm and panic.
The curiosity side of me finds it exciting. Things that used to require deep technical expertise and serious computing power are now accessible to anyone with a phone. You can build a website in minutes, a custom app in less than an hour, not to mention find a random replacement part for your dishwasher or diagnose why your plant is turning yellow. That shift has happened faster than most of us expected.
But I think about the cognitive risks a lot. Confirmation bias is already baked into how our brains work. We’re wired to seek out information that confirms what we already believe, and it’s genuinely harder to look for evidence that challenges us. A system that validates your thinking and keeps generating more “evidence” for what you already believe can quietly make that worse in ways you don’t notice.
I also worry about the illusion of understanding. There’s solid research showing that when people encounter a clear, fluent explanation, they feel like they understand something more deeply than they actually do. That feeling is not the same as learning. Real learning is biological and it takes time. The neural pathways only change with repetition and deliberate practice.
Overall, my take is that we don’t really have the option to decide whether AI will enter our lives; it’s going to affect nearly every part of our society. What we can do is learn to work with the tools in ways that complement rather than replace our thinking, and strive to maintain and protect our values.
We talked a bit about “how kids learn” and what it means to “think.” What are your thoughts as an expert?
AI is different from other tools in education because it works on two levels: it can be used as a tool that supports (or detracts from) learning, and it will also fundamentally shift the kinds of work people do in the future.
I don’t see the goal of education as getting knowledge into students’ heads so they can recite back that knowledge when prompted. Ideally it’s to teach students how to think. It’s to have them learn to ask the right questions, solve problems that matter to them, and be able to evaluate whether something worked.
Core competencies, or “durable skills,” such as critical thinking, collaboration, metacognition, and adaptability, have gone by different names over the years, but they are the underpinning of what we want our students to develop. These competencies aren’t built from just accessing information, but from wrestling with ideas and working with other people.
What will be required to educate our kids so they can thrive in the future?
Students need opportunities to practice the competencies we want them to develop. That is, practice making decisions based on potentially conflicting information, practice working with different kinds of people to get things done, and practice asking the right questions and questioning the answers.
If we want them to thrive in a world where they’ll be interacting with AI, we should also be educating them about how the systems work and how their own thinking works, and giving them opportunities to practice working with those systems in ways that preserve their agency, values, and mental health.
What are you most excited about related to AI innovation and education? And most worried about?
What excites me most is access. If you’re curious about something, you no longer have to wait for the right teacher, the right library, or the right zip code. The ability to learn, create, and work on real problems in your own community is increasingly available to people who never had it.
What worries me is whose values end up baked into these systems. If the primary design goal is engagement or profit, the tools get optimized for that, not for human flourishing. I want a world where people feel connected and have real agency over their futures. The risk of widespread displacement without real inclusion in this new economy is something I don’t think we’re discussing enough yet.
You mentioned to me that art was even more important now. Can you elaborate?
Creating art is a core human endeavor. Art isn’t really about making pretty pictures; it’s about having a perspective. It’s about making choices that reflect something uniquely yours: how you’re interpreting the world, what patterns you’re noticing, what you’re trying to say.
In a world where AI is increasingly making the default a synthesized version of everyone else’s output, the capacity to bring something genuinely your own to the process becomes more valuable. And I think there’s something worth protecting about making things just for the joy of making them, not for a grade or an outcome.
What wisdom would you share with parents about how to think about preparing kids academically and emotionally for the future?
Model curiosity more than you talk about it. Say “I don’t know that, yet” and mean it. Let your kids see you encounter something you don’t understand and seek out more information. That lands differently than any conversation about growth mindset.
Ask more questions than you give answers. Not “AI is good” or “AI is bad” but: how would you know if this output is useful? How would you verify it? What might be missing here? Getting kids in the habit of asking those questions until they become automatic is some of the most useful preparation you can give them.
The kids who will do well in this world are the ones who feel like they have a say about their own futures, and don’t feel that things are just happening to them. That starts with the adults around them actually modeling what that looks like.





