According to Forbes, Kenyon College has been running what it believes is the world’s first human-centered AI curriculum for ten years, led by Professor Kate Elkins. Its AI Lab facilitates collaborative research projects with global partners, testing theory of mind in AI systems and exploring AI hyper-persuasion. Student research has been downloaded more than 90,000 times by institutions worldwide, with projects ranging from political discourse analysis of 316,000 tweets to Supreme Court sentiment tracking. The program partners with Meta, IBM, Notre Dame Tech Ethics Lab, and OpenAI while maintaining small class sizes despite overwhelming student demand.
Why this matters
Here’s the thing: everyone’s talking about AI literacy these days, but Kenyon has been quietly building something much more substantial for a decade. While other institutions are still figuring out how to teach basic prompt engineering, Kenyon students are doing original research that actually gets used. Their 90,000+ download count isn’t just impressive—it suggests people outside academia actually find this work valuable.
What really struck me is how they’re bridging that classic theory-practice gap in the humanities. For centuries, humanities students developed frameworks and ideas but rarely had ways to test them at scale. Now they’re using AI tools to analyze thousands of social media posts or legal decisions to see if their theories hold water. That’s a game-changer.
The employer angle
Look, we keep hearing that employers want liberal arts graduates, but what does that actually mean? Professor Elkins nails it: employers want the specific skills that liberal arts training develops—creativity, problem-solving, project design—but they also want tangible evidence. Nobody’s asking to read your 20-page paper on Foucault, but they will look at a portfolio where you used AI to map end-of-life narratives onto grief stages or built a swim-technique evaluation system.
And that’s where Kenyon’s approach gets really smart. They’re not just teaching students to use AI tools—they’re teaching them to think across disciplines and apply the right models to complex problems. Whether you’re analyzing character networks in Little Women or emotional arcs in Shark Tank pitches, you’re developing the kind of flexible thinking that actually matters in the real world.
Beyond basic literacy
So many AI education initiatives stop at “literacy”—teaching people what AI is and how to use it. Kenyon pushes way beyond that into what I’d call AI fluency. Students aren’t just consumers of AI technology; they’re critical evaluators and creative practitioners. They’re asking questions like “Can chatbots help parents in IEP meetings?” and “How can blockchain IDs support unhoused individuals?”
Basically, they’re treating AI not as an end in itself but as a means to address human problems. That’s the kind of thinking we desperately need as AI becomes more embedded in everything. The technical implementation questions are becoming easier—the hard part is figuring out how to deploy these technologies in ways that actually help people flourish.
The scale problem
Now for the reality check: even Kenyon struggles with scaling this approach. When they briefly removed enrollment caps, the courses became some of the largest on campus. They’ve since returned to smaller, more sustainable class sizes because hands-on mentoring is essential to their model.
But that raises a bigger question: if this approach is so successful, how do we scale it without losing what makes it work? The demand is clearly there—students are apparently surprised to find AI courses that are actually interesting and let them pursue their own questions. That tells you something about what’s missing in a lot of tech education.
At the end of the day, Kenyon’s experiment suggests that the humanities aren’t just surviving in the AI age; they might be more relevant than ever. The key is combining deep conceptual thinking with practical technical skills. And honestly, that’s a combination that could benefit far more than just liberal arts students.
