The Bioethics of the Classroom: Teaching AI Ethics in K-12

Episode 33 of Kinwise Conversations · Hit play or read the transcript

Episode Summary: The Missing Infrastructure in K-12 AI Adoption

Most schools are asking the wrong question about artificial intelligence. The conversation typically starts with “Does this tool work?”, but philosopher, educator, and author Priten Soundar-Shah argues that question belongs at the end of the process, not the beginning. In this episode of Kinwise Conversations, host Lydia sits down with Priten to examine the foundational infrastructure that K-12 institutions are missing: a shared language and practice for ethical reasoning.

Drawing on his background in philosophy at Harvard, his experience helping over 200 institutions navigate the pandemic transition to online learning, and his forthcoming book Ethical Ed Tech (Wiley, 2026), Priten makes a compelling case that the framework medicine uses, bioethics, can and should be adapted for education. The result is a conversation that is both philosophically grounded and immediately practical, offering district leaders, principals, and classroom teachers a new lens for every AI decision they face.

Key Takeaways for Superintendents, K-12 Leaders & Mission-Driven Executives

  • Ethics is a skill, not a policy. Schools that treat AI governance as a compliance checkbox will always be reactive. Priten argues that ethical decision-making is a learnable practice, and one that requires vocabulary, heuristics, and protected time to develop.

  • There are two steps before ‘What is ethical AI?’ First, build general ethical reasoning capacity. Second, apply it to the purpose of education. Only then can a school meaningfully answer the question of what ethical AI looks like in its specific context.

  • Classroom teachers are the most consequential AI decision-makers in any school. Like doctors who know their patients, teachers possess both pedagogical expertise and relational knowledge that cannot be replicated at the district level. Empowering them with ethical frameworks, not just policies, is a strategic imperative.

  • Top-down AI policies produce compliance, not culture. When educators lack buy-in and shared vocabulary, the result is checkbox behavior and minimal-effort adherence. Genuine institutional change requires that teachers be consulted before decisions are made, not informed after.

  • Slow down to lead. In a moment of vendor-driven urgency, the most strategic move a district can make is to define its own problems before entering any sales conversation. Asking ‘What is our highest-priority challenge?’ before evaluating any tool is the foundational leadership practice Priten recommends.


From High School Nonprofit to Ethical AI: Priten’s Origin Story

Lydia: Priten, I’m so excited to have you on the show. You have a book coming out soon, Ethical Ed Tech. You have another book called AI & The Future of Education, and I’m so curious about your journey and how you ended up in this ethical ed tech space.

Priten Shah: Thank you for having me on the show. I started my first ed tech nonprofit in high school. I was trying to explore how we can use technology to scale tutoring. And I really have not left the space since.

That’s meant a lot of different projects, lots of failed startups, lots of temporary work with different institutions. But the two most pivotal moments in my journey ended up being my undergraduate education in philosophy, which shaped my thinking about the larger purpose of education in a democratic society, and the pandemic.

At that point I had started building out my company and had been working with some schools and nonprofit institutions. But the pandemic was the moment where tech and education really came together. Over those two years, we ended up working with over 200 institutions to help them transition to online learning or online extracurriculars. It was a great opportunity to work collaboratively with the education community in a moment of crisis.

And then those same people came back when AI arrived and said, ‘What do you have to help us navigate this new crisis?’ So I integrated all of that together to figure out how we think about AI and education this time around.

“The two most pivotal moments for my journey: my undergraduate education in philosophy, and the pandemic.”

Why Values and Ethics Are the Foundation

Lydia: Philosophy and ethics feel really core to who you are and what you do. How did you realize that values and ethics were going to be so crucial to EdTech in general, and to AI when that came on the scene?

Priten Shah: With EdTech in general, I was learning philosophy and thinking about the larger purpose of education in a democratic society, and it changed the mindset with which I approached what EdTech can do. It became: here are some goals that would be nice to achieve in order to further our democratic ambitions. How can technology help us get there?

With AI, I’ll be honest, it was a jagged journey. When the technology first came out, I was very optimistic. I had played around with large language models in college to work on Sanskrit. It’s a very regimented language, and folks were trying to map it into AI algorithms to do some decoding. A few years later, a lot of things I had been thinking would be possible in ten years suddenly became possible overnight. That was very exciting as a developer and as a technologist.

What we realized was that’s not where educators were. Educators were really scared. Assessments became a lot more challenging. So a lot of my work shifted into building tech literacy so that teachers could feel comfortable with the technology.

Quick background: the pandemic taught me how much of a tech gap exists in education. A lot of our work was getting teachers to mute and unmute a Zoom call. Two years later, there’s AI. Going from ‘here’s how you unmute’ to ‘here’s how you think about AI’ was a huge gap in tech literacy for educators, and still is.

Then I started seeing headlines about integrations that were a little bit shortsighted, and a lot of techno-optimism settling in. We went from being really scared and hesitant to feeling a level of FOMO and moving at a pace that I think was faster than we were prepared to navigate. That’s where this book comes in: the first book talked about all the possibilities; this one is about how we choose the right ones.

The Ethical Framework: Borrowing Bioethics for the Classroom

Why Education Lacks a Culture of Systematic Ethical Thinking

Lydia: How do you help people navigate that?

Priten Shah: One of the biggest challenges is we don’t really have a culture of systematic ethical thinking in education. Meira Levinson, a scholar who has spent a lot of her career thinking about this, makes the larger argument that a lot of other important industries have a very regimented culture of ethics. Medicine is the most obvious example: bioethics is an entire field that defines how practitioners ought to act. We don’t really have an equivalent practice in education.

So a lot of it is: how do we teach folks to think ethically, generally? How do we build the right vocabulary, the right heuristics? Then how do we build that into education? And then, only then, get to the question of what is ethical AI in education? There are two preliminary steps we need to deal with before we can answer that last question. When we rush to it, it still ends up being about the tech and tech literacy, and the ‘should’ question just isn’t being centered.

The Five Principles: Adapted from Bioethics for K-12

Priten Shah: The book takes the approach of building some basic vocabulary about ethics, and I borrow from bioethics to provide that framework. One strand of bioethics talks about four principles that medical practitioners can use: make sure everything you’re doing is doing something good for your patient; do no harm and avoid unnecessary harms; respect the autonomy of your patients; and make sure the decisions you’re making allocate benefits and risks equally among the whole community.

All of those work really well for education too. But the final component I argue is necessary, and this is unique to education, is thinking about Care. The strength of our relationships actually changes how much of what we want from education is possible.

The easy way to think about this: if you have a surgeon you really hate and they’re going to perform an appendectomy, you’ll still go home healthy even though you really hate your surgeon. Your relationship with the surgeon doesn’t affect your ability to go home healthy. But if you really hate your history teacher, it can have a huge impact on not only what you learned that year, but how you think about history long-term. The relationship does matter a lot more in education. And so when we think about ethical decision-making, we ought to think about what this is doing for our relationships as well.

“Policy is just compliance if you don’t have the why behind it from people on the ground.”

The Book’s Three-Part Structure

Lydia: Can you talk about your book and how this framework shows up?

Priten Shah: The book does fill a gap I see. It builds some basic vocabulary about ethics, then figures out: what are the specific concerns that technology brings up? Environmental concerns, the black-box effect, bias, data privacy … those are really a second step. The first step is how do I think about ethics as a whole?

I have to flag that I might be disappointing to some audiences because I don’t give universal answers. I don’t think there are universal answers. Every single school in the country ought not to think about ethical ed tech in the exact same way. The answer is very different based on your individual context, what resources your school has, what your community is.

It’s very similar in medicine: ethics there isn’t about saying every single case of organ donation ought to be dealt with in one prescribed way. The goal is providing practitioners with the resources to have those conversations in their community. I’m trying to do the same thing for education.

The second part of the book talks about how we build that into policies and systems within schools. The last part is putting it into practice: now that you have the vocabulary and the systems, what does this look like in real life? I built 12 case studies based on interviews and real decisions that actual educators have had to face about ethical educational technology, and scaffold it so the reader can use the resources from the first two sections to navigate what they would do in that situation.

Strategic Insight: Why Classroom Teachers Must Lead AI Decision-Making

The Case for Teacher-Centered Policy

Lydia: Who is the ideal reader for your book?

Priten Shah: The ideal reader is the average educator, classroom teachers specifically. That’s counterintuitive for some folks. When people think about ethical decision-making in education, they often think about policymakers, school building admin, directors of technology. Those are a secondary audience. But my ideal world is that much more of the decision-making happens at the individual classroom level, and that requires teachers to have literacy in both ethics and technology.

Lydia: Why is that important? When we think about decision-makers, a lot of times people think about principals or district leaders.

Priten Shah: A few reasons. First, there are a lot of micro-decisions within the control of individual educators that end up having a huge impact on students. Teachers get the most face time with students. Even if a district buys a tool or implements a policy, teachers are often the ones enforcing it. They’re the ones who decide whether a tool the school bought will be used once a year or every day.

The second piece is about contextual knowledge. The reason we trust the doctor to make decisions and not the hospital CEO is because the doctor has both subject matter expertise and knows the patient best. Similarly in education, our teachers are the ones who have spent the most time building up their skillset in pedagogy and curriculum, and who know the exact moment, the exact scenario, who is in their classroom.

The third piece: I’m hoping educators are listened to even more as we figure out what the next stage of education looks like. Oftentimes those decisions are made top down because of efficiency reasons and tradition. But if educators were consulted more, there’d be a lot more buy-in on the policies. And in general, I think we’d be making better decisions.

Compliance vs. Culture: The Buy-In Problem

Lydia: Policy is just compliance if you don’t have buy-in and the why behind that from people on the ground.

Priten Shah: Exactly. You end up with a lot of compliance behavior, just checking the box, asking what’s the bare minimum to adhere to the policy. That’s not really creating a culture of effective practice. It’s not creating a culture of ethics or active decision-making by our educators. And we see that with most decisions in education that don’t have buy-in from educators: they don’t really end up doing what we want them to do. The benefits don’t accrue.

From Theory to Practice: Two Case Studies in Ethical AI Adoption

A Positive Example: AI-Powered Absenteeism Outreach

Priten Shah: One of the most interesting positive use cases I saw recently was at the school level, around chronic absenteeism. One school district implemented a tool that, when a student is marked absent, will auto-text or email the parent and ask what’s going on. The AI logs the reason into the system, whether it’s a medical appointment, sickness, or no response, and flags for administrators which students need follow-up.

That’s a really cool instance of scaling something that isn’t feasible to do by hand. We could not possibly reach out to every student’s parent every time they’re absent at any large scale. It makes the human part of this more effective because now it’s about triaging and using human resources in the most important ways, where the need is greatest. It allows us to reach out to our more vulnerable students. Running it through the five principles quickly: does it do something good? Yes, it helps combat absenteeism. Does it do no harm? The biggest concern is data privacy; if the school vetted the tool properly, that concern can be addressed. Justice? You’re accruing benefits for those who need it most without creating unfair advantage. Autonomy? Did you get parental consent for this kind of outreach? And Care? It’s very clearly only strengthening relationships with students.
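For readers who want to picture the mechanics, here is a minimal, purely hypothetical sketch of the triage logic such a tool might implement: message the guardian, categorize the reply, and flag only repeated unexplained absences for a human administrator. The class names, the keyword matching (standing in for the AI classifier), and the follow-up threshold are illustrative assumptions, not the district’s actual system, and a real deployment would still need the data-privacy vetting and parental consent Priten just walked through.

```python
from dataclasses import dataclass
from enum import Enum


class AbsenceReason(Enum):
    MEDICAL = "medical appointment"
    SICK = "sick"
    NO_RESPONSE = "no response"
    OTHER = "other"


@dataclass
class Student:
    name: str
    guardian_contact: str
    unexplained_absences: int = 0


def classify_reply(reply):
    """Rough keyword matching that stands in for the AI classification step."""
    if reply is None:
        return AbsenceReason.NO_RESPONSE
    text = reply.lower()
    if "appointment" in text or "doctor" in text:
        return AbsenceReason.MEDICAL
    if "sick" in text or "ill" in text:
        return AbsenceReason.SICK
    return AbsenceReason.OTHER


def handle_absence(student, guardian_reply, followup_threshold=3):
    """Log the absence reason and decide whether a human should follow up.

    Returns True when an administrator should reach out personally, so the
    tool triages attention toward the highest-need students rather than
    replacing the human relationship.
    """
    reason = classify_reply(guardian_reply)
    if reason in (AbsenceReason.NO_RESPONSE, AbsenceReason.OTHER):
        student.unexplained_absences += 1
    return student.unexplained_absences >= followup_threshold


if __name__ == "__main__":
    s = Student("Jordan", "guardian@example.com", unexplained_absences=2)
    if handle_absence(s, guardian_reply=None):
        print(f"Flag {s.name} for administrator follow-up")
```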

A Cautionary Tale: Online Standardized Testing

Priten Shah: Online standardized testing seems like a great use of technology on face value. Accessibility goes up. Rural communities don’t have to send students to a proctored center. Tests can be screen-reader compatible, dyslexic-font compatible. There are a lot of things a digital test allows you to do that you can’t do with pen and paper.

When you think about the long-term implications, though: if the test at the end of the year is online, so much of our classroom behavior is test prep. Does that mean we’re spending a lot more time on screen all year long because that’s the test condition? What does that do for our relationships with our students? What does that do for their social-emotional development?

That’s where you start to think about possible harms and then ask whether there’s a way to accrue some of the benefits. Could we offer the digital test only to students who need accessibility accommodations, rather than making it universal? Once you start thinking about these ethical concerns, you can also come up with a middle ground. That’s the kind of nuanced thinking we can all do when we slow down and learn how to ask the right questions.

“The first question shouldn’t be ‘Does this tool work?’ It should be ‘What problem are we actually trying to solve?’”

Student Voice in AI Governance: A Developmentally Scaffolded Approach

Lydia: What role should students play in the decisions teachers and districts make about AI?

Priten Shah: Students are complicated for philosophers for multiple reasons. Developmentally, they’re not necessarily thinking about long-term implications. But I think we can phase in student voice a lot earlier than we often do. Oftentimes it never happens, even through high school. There’s sometimes a student representative, and all these little things we do to pretend we’re listening, but students really don’t have any substantive power.

By the time they’re in high school, they have a good sense of what decisions are being made and what the possible implications are. Talk to them, and you’ll have a productive conversation.

And there are ways to do this even at the kindergarten level. For a kindergartner, you might present two choices you’ve already vetted: ‘Do you want to read the book on the iPad or the physical book?’ Little things that let the student have a voice in their education that don’t require a student board scenario. As they get older, they should get more of a say, not only in their own individual decisions, but how the school itself is making decisions.

The second complication is that we not only have an obligation to our students now, we also have an obligation to who they will be in the future. Education isn’t shortsighted. That requires the insight of adults who have formed wisdom and can think about what values ought to underpin the long game. But right now we’re so far from giving students any meaningful say that if the heuristic is ‘let’s incorporate student voices more,’ I think we’ll be okay and safe.

Building Ethical Infrastructure Into District AI Policy

Lydia: Right now districts are rushing to publish AI policies and have some stance. How should these ideas around ethical infrastructure show up in a policy, ideally?

Priten Shah: We’re not going to magically overnight build all the perfect ethical infrastructure. But if we start integrating it in individual processes, it’s a lot easier than a big-scale overhaul.

With district policies, we’re seeing a lot of top-down decisions and a lack of transparency. There was a case in California recently where district admin signed a contract with OpenAI and then withheld it from the board, from parents, and from the community. That’s the perfect example of a non-ethical process. Everybody’s intentions might have been great. I’m not questioning anyone’s values, but there clearly wasn’t a process for ensuring transparency of decision-making. Those are the kinds of things we can start repairing quickly without overhauling things dramatically.

The other big part is how much we’re consulting teachers and students when making these decisions. A lot of times even when we see a student task force or an educator reading group, it’s lip service. Everybody’s already made up their mind before the task force even meets. We can start incorporating more voices immediately, tomorrow, to make our decision-making a little more ethical, without requiring a substantial amount of training for everybody. That alone is a good, easy baby step toward better policies.

The Strategic Stakes: What Hope and Fear Look Like for K-12 Leaders

Lydia: What gives you the most hope about AI in education right now, and what gives you the most concern?

Priten Shah: I think they’re two sides of the same coin. I am hopeful that if we build a community of practice, if we talk to each other, if we really think about what our basic values are as educators, and then approach how we shape education for the next era, we can make it a lot more human-centered. I think we can solve some of the biggest problems we have. I think we might move away from standardized testing. I think we might move away from strict grading as the only incentive structure. I think we might be able to make our classrooms more human in general. That would be the ideal, hopeful response to AI.

My concern is we do the exact opposite. That we turn to more in-class standardized testing because we need to assess students in ways that feel ‘AI-proof.’ That we let the tech industry control the idea of what education needs to look like. That we start teaching prompt engineering in kindergarten when no one is going to be prompt-engineering by the time those kindergartners enter the workforce. It’s such a shortsighted approach we’re being sold right now, and I really hope we don’t buy it. Instead, use this moment to ask: what is education’s role long-term, not just what are we being told it is for the next two weeks until the next tech update?

“Use this moment to ask what education’s role is long-term, not just what we’re being told it is for the next two weeks until the next tech update.”


Guest Bio

Priten Soundar-Shah is an educator, philosopher, and entrepreneur working at the intersection of humanistic values and frontier technology. He is CEO of PedagogyVentures and Executive Director of PedagogyFutures, a nonprofit focused on ethical ed tech. A Harvard philosophy graduate and Harvard GSE alumnus, he serves as an Associate in Harvard’s Department of Philosophy. He is the author of AI & The Future of Education (Wiley) and the forthcoming Ethical Ed Tech (Wiley, 2026).

Connect with Priten Shah

  • Website & Book Pre-Order: https://www.ethicaledtech.org

  • LinkedIn: linkedin.com/in/pritensoundarshah

  • Newsletter (Substack): https://read.priten.org/

  • Podcast: Margin of Thought with Priten, available on all major podcast platforms


Resources Mentioned & Related Concepts

  • Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12 (Wiley, May 2026): Priten’s forthcoming book and the primary resource discussed in this episode.

  • AI & The Future of Education: Teaching in the Age of Artificial Intelligence (Wiley): Priten’s first book, available now in print, audio, and e-book.

  • Principlism (Bioethics Framework): The four-principle framework from Beauchamp & Childress (Beneficence, Non-maleficence, Autonomy, Justice), adapted by Priten with a fifth principle, Care, for education.

  • Meira Levinson: Scholar referenced by Priten for her work on systematic ethical thinking in education. Affiliated with Harvard Graduate School of Education.

  • PedagogyFutures (nonprofit, https://pedagogyfutures.org): Professional development resources for responsible, ethical, human-centered educational technology.

Next

Place-Based AI: Grounding Technology in the Real World