AI as Meaning Mediation: Early Findings from EdgeLab Research on Mathematics, Inclusion, and the AE-Engine

[Figure: the “Student–AI–Teacher Ecology” in learning mathematics. A student asks whether it is safe to divide by cos θ; the AI offers a quick procedural solution that risks loss of meaning; a teacher intervenes to guide understanding and restore conceptual reasoning.]

Something important is beginning to emerge from our early EdgeLab work on AI in mathematics education.

Much of the current discussion around generative AI in classrooms still assumes that the key question is whether the system can produce a correct answer, a useful explanation, or a faster route to task completion. But early observations from a small research project working with secondary students suggest that this way of thinking may be too narrow. The deeper issue is not simply whether AI can answer, but whether the interaction helps a learner construct meaning.

This matters especially in mathematics, and it matters especially for inclusion.

At EdgeLab, we are increasingly interested in AI not as a neutral source of information, but as a meaning-mediating infrastructure: something that shapes what kinds of learning moves become possible, intelligible, and sustainable in the moment of interaction. That framing is also central to recent conceptual work arguing that generative AI influences education not only through outputs, but through pacing, framing, closure, and the normative “grammar” of interaction itself.

Our early findings suggest that this is not just a theoretical claim. It is observable in real classroom interaction.

From answers to mediation

A common assumption in AI and education is that learning improves when students have access to accurate explanations on demand. But that assumes that explanation is the same thing as understanding.

It is not.

A student does not simply receive meaning from an AI system. They must reconstruct that meaning from within their own prior knowledge, confidence, habits of reasoning, representational comfort, and sense of what the task is asking. If the explanation is too dense, assumes the wrong background, moves too quickly, or frames the task in a way the learner cannot work with, then the explanation may be technically correct and still fail educationally.

This is where our current project becomes interesting.

Working in a secondary mathematics context, we have been observing interactions between students, generative AI, and a teacher-researcher acting as a manual mediation layer. The present phase is not yet testing a fully deployed AE-Engine. Instead, standard ChatGPT is being used, while the teacher intervenes in the moments where meaning begins to flatten, drift, or collapse. Those interventions are being documented closely as a way of identifying what an eventual AE-informed mediation layer would need to do automatically.

In effect, the teacher is making visible the hidden ecological work of keeping meaning alive.

What the early sessions are showing

So far, two contrasting learner profiles have been especially revealing.

One is a high-attaining Year 12 Further Mathematics student with ASD. The other is a lower-attaining Year 11 student with ASD working at around Grade 1–2 level. These are not simply “strong” and “weak” students in a generic sense. They expose different ways in which AI-mediated interaction can become unstable.

The Year 12 case: when AI closes too quickly

With the stronger student, the issue is rarely raw procedural ability. He can often solve problems, compare methods, scan explanations, and detect inconsistencies. The breakdowns arise elsewhere.

In one observation, the student raised a mathematically sophisticated concern: whether dividing by a trigonometric expression might lose solutions if that expression could be zero. This is exactly the kind of conceptually important question that good mathematical thinking should keep open.
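
To see the shape of the concern, here is a minimal worked illustration; the specific equation is ours, chosen to match the issue, not taken from the session. Consider sin θ cos θ = cos θ for 0° ≤ θ < 360°:

  • dividing by cos θ gives sin θ = 1, so θ = 90° only
  • factorising gives cos θ (sin θ − 1) = 0, so cos θ = 0 or sin θ = 1, giving θ = 90° and θ = 270°

Dividing silently discards θ = 270°, the solution where cos θ = 0. Whether that division is legitimate is precisely the question the student was trying to hold open.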

But the AI did not stay with the conceptual issue. It quickly shifted the interaction toward procedural optimisation and solution production. It answered a different question from the one the student was trying to investigate.

This pattern has appeared more than once. The AI is often good at generating mathematically plausible next steps, but poor at sustaining the conceptual distinction the learner is actually trying to hold open.

The result is a subtle but important form of collapse: the student’s inquiry is redirected from “What is mathematically valid here?” to “What is the quickest way to complete the problem?”

When the teacher intervenes, what matters is not more explanation, but a different kind of intervention:

  • slowing the interaction
  • naming the missing distinction
  • reconnecting a current error to an earlier conceptual concern
  • keeping the structural question alive across multiple turns

This is not mere clarification. It is a form of temporal stitching: reconnecting moments that the AI treats as separate but that are conceptually part of the same line of thought.

The Year 11 case: when AI overwhelms before structure stabilises

With the Year 11 student, the ecology looks different.

Here the problem is not that the student is asking subtle conceptual questions that the AI prematurely closes. The problem is earlier. The learner does not yet stably perceive the structure of the expression itself.

A simple linear model such as:

C = 4 + 2n

does not automatically appear as “fixed cost plus rate times variable.” Instead, the student may see it as a string of operations, a computation to be attempted, or something to paste into the AI.
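
The structural reading can be made concrete with a few values, using the £-per-mile interpretation that appears later in these sessions:

  n (miles):  0   1   2   3
  C (£):      4   6   8   10

The 4 is present before a single mile is travelled, and each extra mile adds exactly 2. Read as a bare string of operations, neither of those facts is visible.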

In these interactions, a very clear breakdown pattern has begun to emerge:

  1. the AI produces a dense worked explanation
  2. the student becomes overloaded or loses the thread
  3. the AI’s response is misrecognised or mistrusted
  4. the underlying concept remains unstable

In shorthand, we might describe this as a cascade from language overload to AI misrecognition to persistent conceptual gap.

Again, the most important educational work is done not by the AI but by the teacher-researcher, who slows the loop and performs very specific mediation moves:

  • asking the student to talk through what the AI has done
  • isolating one structural element, such as “2n means £2 per mile”
  • translating into an existing schema, such as BIDMAS
  • validating the student’s own viable reasoning strategy, such as incremental difference rather than symbolic rearrangement

This last point matters a great deal. The lower-attaining student often reasons successfully through pattern and increment: “27, 31, 35…” The AI frequently overwrites that pathway with algebraic manipulation. But if the teacher allows the student’s existing pathway to remain in play, meaning can stabilise.
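
There is real mathematics in that increment. Assuming the values 27, 31, 35 come from consecutive inputs, the constant step, 31 − 27 = 4 and 35 − 31 = 4, carries exactly the information an algebraic rule stores in its coefficient; the student is reading the rate off the pattern rather than off the symbols.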

The lesson here is not that the student lacks reasoning. It is that the AI often assumes the wrong representation for that learner.

Inclusion looks different from this perspective

These sessions are beginning to suggest a very different understanding of inclusion.

Inclusion is often discussed as if it simply means access: access to devices, access to tools, access to adaptive technology, access to support. But access alone does not ensure meaningful participation.

A learner can have full access to an AI system and still be excluded from the meaning of what it produces.

From this perspective, inclusion is not just about whether an AI system can provide support. It is about whether the support is reconstructible for the learner. That depends on much more than correctness. It depends on whether the system can interact in ways that are compatible with the learner’s current structures of understanding, preferred representations, trust level, pace of processing, and confidence.

The key question becomes:

Can this learner reconstruct the meaning of this explanation within their own way of making sense?

If not, then what looks like support may actually function as exclusion.

This is why SEND learners are such important participants in this research. They make visible problems that may be widely present but easier to ignore elsewhere. When interaction breaks down quickly, the assumptions built into AI explanation become easier to see.

AI as meaning mediation

A recent conceptual paper argues that large language models in education are better understood as meaning-mediating infrastructures than as neutral information tools. They shape conduct through pacing, reassurance, framing, tone, and closure, making some learning moves easier and others less viable.

What the present classroom observations suggest is that this argument has strong practical force.

The issue is not only what the AI says, but what the interaction makes possible. Does it invite inquiry, hold uncertainty open, help the student identify the governing distinction, and support a viable route through the task? Or does it produce an answer-shaped object that closes down the very question the learner needs help with?

This is where the AE-Engine idea becomes especially interesting.

What the AE-Engine might actually be

It is tempting to imagine an AE-Engine as a “smarter tutor” or a “better chatbot.” But the early evidence points in a different direction.

The AE-Engine, as it is beginning to take shape conceptually, is not primarily about generating better answers. It is about mediating the conditions under which meaning can stabilise.

That would involve things like:

  • recognising when a student is asking a conceptual question rather than a procedural one
  • compressing language when overload is likely
  • identifying the key distinction in a model or expression
  • prompting the learner to notice what stays the same and what changes
  • checking whether the learner’s existing reasoning pathway is valid before replacing it
  • helping the learner recognise what counts as completion
  • preserving conceptual continuity across multiple turns of interaction
  • explicitly repairing trust after AI errors

In short, it would function less like an oracle and more like a patient, context-sensitive mediation layer.
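
To make the shape of that idea concrete, here is a deliberately minimal sketch in Python. Everything in it is hypothetical: the names, heuristics, and interface are illustrative inventions of ours, not a description of any existing AE-Engine implementation.

# A hypothetical sketch, not an actual AE-Engine API: all names and
# heuristics below are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class LearnerProfile:
    """The minimum a mediation layer would need to know about a learner."""
    preferred_representation: str = "incremental"        # vs e.g. "algebraic"
    overload_threshold: int = 60                         # rough word budget per turn
    open_questions: list = field(default_factory=list)   # conceptual threads to keep alive


# Crude keyword stand-in for recognising a conceptual move; a real system
# would need something far richer than this.
CONCEPTUAL_MARKERS = ("why", "what if", "could", "lose", "valid", "allowed")


def classify_turn(student_utterance: str) -> str:
    text = student_utterance.lower()
    return "conceptual" if any(m in text for m in CONCEPTUAL_MARKERS) else "procedural"


def mediate(student_utterance: str, ai_draft: str, profile: LearnerProfile) -> str:
    """Reshape an AI draft reply before the learner sees it."""
    move = classify_turn(student_utterance)

    # 1. Hold conceptual questions open instead of letting the draft close
    #    them into a procedure (the Year 12 failure mode).
    if move == "conceptual":
        profile.open_questions.append(student_utterance)
        return ("Good question; let's keep it open before solving anything. "
                "For reference, the usual first step would be: "
                + ai_draft.split(".")[0] + ".")

    # 2. Respect a viable existing pathway rather than overwriting it with
    #    symbol manipulation (the "27, 31, 35..." pattern).
    if profile.preferred_representation == "incremental" and "rearrange" in ai_draft.lower():
        return ("Before rearranging symbols: step the values one at a time "
                "and watch how much the total changes at each step.")

    # 3. Compress when the draft is likely to overload this learner
    #    (the Year 11 failure mode).
    if len(ai_draft.split()) > profile.overload_threshold:
        return ai_draft.split(".")[0] + ". Let's check just this one step first."

    # 4. Temporal stitching: reconnect the reply to an earlier open thread.
    if profile.open_questions:
        return (ai_draft + " (This bears on your earlier question: '"
                + profile.open_questions[-1] + "')")

    return ai_draft


# Invented utterances, purely for illustration:
profile = LearnerProfile()
print(mediate("Could dividing by cos(theta) lose solutions?",
              "Divide both sides by cos(theta), then solve sin(theta) = 1.",
              profile))
print(mediate("What's the next step?",
              "Substitute n = 3 into C = 4 + 2n, giving C = 10.",
              profile))

The point is architectural rather than behavioural: the layer sits between learner and model and decides, turn by turn, whether to pass a draft through, hold a question open, defer to the learner's own pathway, compress, or stitch back to an earlier thread. Real versions of those decisions would need far richer models of the learner than keyword matching can provide.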

Why this matters for mathematics

Mathematics is often treated as the ideal domain for AI because there are right answers, formal methods, and clear solution paths. But that can be misleading.

The early observations suggest that mathematics is also a domain in which the difference between answer production and meaning mediation becomes starkly visible.

A student may get to the right answer and still not know why the task is complete. They may detect a valid conceptual issue and then have it dissolved by the AI’s drive toward closure. They may reason effectively through increments or examples, yet be made to feel wrong because their pathway does not look algebraic enough. They may trust the system until a single misread value destabilises the interaction.

All of this matters educationally. None of it is captured by answer accuracy alone.

A possible contribution to knowledge

What this work may be contributing is a new way of thinking about AI in mathematics education and inclusion.

Not:

  • Does AI improve outcomes?
  • Does AI explain maths better?
  • Does AI personalise learning?

But rather:

  • Under what conditions can learners reconstruct the meaning of AI explanations?
  • What kinds of interaction lead to breakdown or stability?
  • What ecological work is currently being done by teachers to keep AI-supported learning viable?
  • What kinds of mediation might an AE-informed system eventually automate or support?

This shifts the focus from performance to viability, from answer quality to interactional ecology, and from access to reconstructible meaning.

What comes next

The project is still at an early stage. But it is already producing unusually rich observations of what happens when students, teachers, and AI meet in real mathematical tasks.

The next steps are likely to involve:

  • additional observations to confirm the emerging patterns
  • tighter comparison across contrasting learner profiles
  • small experimental changes, such as giving the AI more context about a learner’s prior structures or representational preferences
  • writing up the early findings in a form that connects conceptual work on AI as meaning mediation with empirical classroom evidence

What is already clear, however, is that something important is being made visible.

The most significant educational question may not be whether AI can solve the task. It may be whether the interaction can keep the right distinction alive long enough for the learner to make meaning from it.

That is where EdgeLab’s interest lies.

Not AI as answer machine.

AI as meaning mediation.

And, perhaps, AI as a new site for thinking seriously about inclusion in mathematics.
