Building on the AI Indicator: Piloting a Practical Framework for Middle Schools

by Dr Megel Barker

A few days ago, I shared my reflections in The International Educator on the growing tension around generative AI in schools. That article, Promoting Responsible Generative AI Use in Schools, was born from real conversations in my school—conversations filled with frustration, uncertainty, and good intentions. Teachers flagged student work that felt too advanced. Parents defended their children’s efforts. Students stood firm in their claim: “I only used AI a little bit.”

That article clearly struck a chord. I received messages from educators across different contexts—some eager to try the AI Indicator I introduced, others asking how to talk about AI use with students, and many simply saying, “We’re struggling with this too.” So this piece is about the next step.

From Framework to Practice: Testing the AI Guidelines Indicator

The AI Guidelines Indicator (AIGI) emerged from the simple realisation that our current academic honesty frameworks are no longer enough. Generative AI doesn’t copy—it creates. There’s no direct citation and no plagiarised paragraph to highlight. But that doesn’t mean we can’t build clarity.

In response, I developed a three-level system to guide AI use in student tasks:

● 🔴 Red – No AI Use Permitted

● 🟡 Yellow – AI Use Allowed with Citation and Explanation

● 🔵 Blue – No AI Use Restrictions

It’s simple by design—made for middle schoolers who need clear, consistent boundaries, not an abstract philosophical debate about authorship. It is also for teachers who are managing full workloads and can’t afford the ambiguity of AI-related “he said, she said” scenarios. And it is for parents, many of whom are navigating AI for the first time themselves, just trying to help their children succeed.

So Why a Pilot?

Since publishing the article, one question has come up repeatedly:

“How do we know this will actually work?”

That’s exactly what I want to find out.

We’re now moving from theory to action through a six-week pilot program involving students, teachers, and parents. The goal? To test how well the AIGI:

● Clarifies expectations around AI use

● Reduces classroom conflict over AI-generated work

● Encourages students to reflect on and document their AI interactions

● Helps teachers design tasks that promote both creativity and accountability

The pilot includes teacher implementation guides, student instruction sessions, and a parent engagement component. We will gather feedback through pre- and post-surveys, focus groups, and analysis of student-submitted work. But here’s the thing: I don’t want to do this alone.

A Call to Collaborate: Join the Pilot

If you are a school leader, curriculum coordinator, or teacher curious about how to guide responsible AI use in your own school, I am inviting you to join this pilot.

Participating schools will receive a starter toolkit, including:

● A digital version of the AIGI graphic

● Teacher instructions for tagging tasks with Red, Yellow, or Blue indicators

● Sample student and parent communications

● A framework for gathering feedback and reflections

This is not about perfection—it’s about progress. We are still in the early days of AI in schools, and we need collaborative, adaptive thinking to guide us forward. If a handful of schools can test this model and share insights, we can evolve AIGI into something that works not just in theory, but in real classrooms with real students and families.

Why Middle School?

Some have asked: Why focus on middle school? Aren’t high school students the ones producing AI-written essays and applying to college?

Yes—and that’s precisely why middle school is the place to start.

This is where habits are formed, where students learn what academic integrity really means. It’s the moment before performance pressure peaks, before grades carry transcript weight, and while students are still open to guided reflection. AIGI is not about punishment; it’s about teaching discernment early so that students are equipped to make ethical decisions later. If we introduce thoughtful AI policies in middle school, we can shift the culture before AI dependency becomes a problem we’re trying to reverse.

Final Thoughts: It’s Time to Lead the Conversation

AI in education isn’t an “if,” it’s a “how.” How do we help students use these tools meaningfully, without outsourcing their thinking? How do we protect the integrity of learning while embracing innovation?

AIGI isn’t a perfect framework. But it is a place to start—a way to move from confusion to clarity, from suspicion to shared understanding.

If you’re reading this and thinking, “We need something like this in my school,” you’re not alone. Let’s test this together. Let’s gather our reflections. Let’s shape what responsible AI use looks like—before someone else defines it for us.

And if you are interested in piloting the AI Guidelines Indicator, collaborating on the research, or just want to start a conversation, I would love to connect. Reach me at embark30@gmail.com or send a message on LinkedIn.

This article, authored by Dr Megel R. Barker, Head of Middle School, TASIS – The American International School England, was developed with the support of AI to refine the structure and enhance clarity, demonstrating the very principles of responsible AI use that it advocates.
