Malachi Gee discusses the significance of AI for disabled students and how its increasing stigmatisation is harmful to those who rely on assistive technology for accessibility in educational settings.

Assistive, not generative: how AI has long helped disabled students, and why renaming it makes life harder


The question that keeps turning up in university discussions is deceptively simple: how can AI help with accessibility? It should be the easiest part of the debate. Instead, it has become the most confused, because most conversations about AI start at the wrong end. They begin with cheating, automation, and the fear that students are outsourcing their degrees to a polite rectangle that can write smoother prose than any sleep-deprived human ever will.

Those fears are not imaginary. But they have swallowed the real story. Disabled students have been using “AI” for years. We just did not call it AI. We called it accessibility software. And the moment the label changed, the ground shifted under our feet.

To be upfront, I used assistive AI to write this article. That is not a confession. It is the point. For me, and for many disabled students, these tools are not a novelty or a shortcut. They are the practical infrastructure that makes academic work possible. The thinking, the judgement, the argument, and the voice are mine. What the technology changes is the route from mind to page. That distinction, between assistance and substitution, is exactly where the current debate keeps losing the plot.

I draw a hard line here. I use assistive tools and am transparent about it because they preserve authorship while lowering barriers. What I will not do is outsource thinking, interpretation, or argument to a generative model. Universities assess judgement, not just fluency. If a tool starts supplying the substance of the work rather than helping me express it, it stops being accessibility support and starts being a substitution.

I realised this properly in a tutorial with Dr Ben Roberts last term. I was talking through a paragraph on Late Bronze Age political economies, and Dragon NaturallySpeaking decided that the phrase “Silk Roads” was actually “Szechuan.” Ben stopped me, frowned, and asked me to repeat the sentence. I did. He looked confused. I looked confused. For a brief, cursed moment, we both wondered whether I had spontaneously redirected Afro-Eurasian exchange into regional Chinese cuisine. Then we checked the transcript and discovered that Dragon had simply panicked in the face of specialist vocabulary and improvised like a drunk scribe.

That is the part of the AI story that rarely gets airtime. Assistive tools do not magically understand your degree. They do not know the difference between Kassite and Kaska, or why mishearing a polity can derail an entire argument. Unless you feed these systems a steady diet of your own terminology, they struggle. They flatten nuance. They guess. They hallucinate extinct states into existence. And if you rely on them for access, you become the person who has to catch those mistakes before they make you look like you rewrote Near Eastern chronology out of boredom.

Most conversations about AI start at the wrong end. They begin with cheating, automation, and the fear that students are outsourcing their degrees

But those failures are not an argument against assistive tools. They are the point. These systems are scaffolding, not authors. They help you do the work, yet they still require you to do the work. Most of what disabled students use is not content generation. It is translation, navigation, and cognitive load management: getting words onto the page, getting the page into your head, and keeping up with a system that was never designed for brains like yours.

Dragon (speech-to-text) is the obvious example. You talk and it types. That is the whole trick. If you have dyslexia, dyspraxia, ADHD, chronic fatigue, joint pain, or anything that makes typing slow, painful, unreliable, or cognitively expensive, speech-to-text removes the bottleneck. It bypasses the moment where your brain has a thought, your fingers cannot keep up, and the thought evaporates before it hits the page. It does not write your argument. It stops the argument from being strangled at the point of transmission.

Working the other way is text-to-speech, and this is where people who do not need it tend to underestimate it. Tools like Read&Write from Texthelp (R&W) read text aloud and highlight as they go. For dyslexia, visual stress, ADHD, or certain auditory processing differences, a dense PDF can feel like trying to decode a ransom note while the paper keeps sliding away from you. Text-to-speech stabilises the pace, and the highlighting keeps your place when your eyes want to skid sideways into the margins. It turns reading into something closer to listening, which for many neurodivergent brains is simply easier. It is not cheating. It is access.

Then there are lecture capture and transcription tools, and this is where the “AI debate” most obviously forgets what a lecture actually is. A lecture is an accessibility obstacle course disguised as teaching. You are expected to listen, understand, identify what matters, and write it down in real time, usually while slides flicker past as if trying to escape. If you have ADHD, dyslexia, auditory processing differences, or anxiety that spikes when a lecturer accelerates, being told “just take notes” is not advice. It is a trap with a PowerPoint.

Genio (formerly Glean) is basically a replay button for education. It records lectures, supports transcription, and lets you return to the sentence you missed because your brain was busy managing pace, noise, sensory load, social pressure, and the mild terror of falling behind. It does not attend the lecture for you. It gives you access to the lecture you did attend, in a format your brain can actually process. You still have to do the learning. The software just stops the learning from being lost to timing.

Scholarcy does something similar for reading. It takes an academic article and produces a structured summary: the argument, key points, evidence, and headings that matter. It gives you a map. If your executive function is on strike, or your processing speed turns a forty-page paper into a wall you cannot climb, that map is not cheating. It is how you begin. It turns “I cannot face this” into “I can start here, then verify it against the original.” The verification matters. The authorship remains yours. What Scholarcy reduces is the cost of entry.

A lecture is an accessibility obstacle course disguised as teaching. You are expected to listen, understand, identify what matters, and write it down in real time, usually while slides flicker past like they are trying to escape.

If you want a single sentence that sums up assistive tools: they do not do the thinking, they make thinking possible.

This is why spelling and grammar tools matter, too. Proofreading is not glamorous. It is error-catching. Dyslexia does not stop you from having ideas. It stops those ideas from arriving on the page in a way neurotypical readers find acceptable. For years, Grammarly sat fairly comfortably in the “proofreading layer” category: spelling, punctuation, grammar, clarity. For many disabled students, it was not a cheat code. It was the difference between a marker engaging with an argument and being distracted by surface noise.

But Grammarly is also the clearest example of why this debate became messier. The line blurred, not because disabled students collectively decided to start cheating, but because products evolved. Grammarly introduced “tone rewrite suggestions” in 2022, moving from correction into reshaping phrasing for effect. Then GrammarlyGO arrived in March 2023, explicitly generative: helping draft, rewrite, and produce text in context. From a product perspective, that drift makes sense. From an accessibility perspective, it created a trap. A tool that once caught errors now also has the capacity to reshape prose in ways that edge toward authorship.

Universities responded with suspicion. Policymakers responded with blunt instruments. Disabled students got the thrilling job of being caught in the crossfire.

This is where the receipts matter. In February 2025, the Student Loans Company (SLC) issued Student Support Information Note SSIN 04/25, notifying universities, needs assessors, and practitioners that the Department for Education had decided to remove non-specialist spelling and grammar software from Disabled Students’ Allowance (DSA) funding, on the grounds that free-to-access versions now exist. The formal DSA guidance then set the implementation line: from 17 March 2025, DSA would not fund non-specialist spelling and grammar software, other than in exceptional circumstances. Parliament later clarified that this policy applies to new DSA recipients only and does not affect spelling and grammar software already awarded to existing students.

Taken in isolation, the decision can be defended as a matter of public money and free alternatives. Taken alongside what commercial tools have become, it reads differently. Disabled students are now expected to navigate increasingly technical distinctions between “acceptable” and “unacceptable” assistance, often without clear institutional guidance. You are told to be transparent, but transparency can become a liability if staff do not share a consistent understanding of where proofreading ends and authorship begins.

And, bluntly, that suspicion is not new. Accessibility has always been treated as morally dubious in a way that says more about the institution than the student. Extra time at GCSE gets framed as an unfair advantage rather than a levelling of the playing field. DSA-funded technology is still described, in some corners, as a perk, as if specialist support were a luxury rather than the scaffolding that prevents students from falling through the floor. The baseline is assumed neutral, the neurotypical workflow is treated as “normal”, and anything that compensates for impairment gets quietly reframed as cheating. The moral panic changes outfits, but it keeps the same skeleton.

That suspicion is not new. Accessibility has always been treated as morally dubious in a way that says more about the institution than the student.

AI has simply become the newest stage of that argument. Because the word now covers both assistive tools and generative engines, disabled students have been pulled into a debate that was never designed with us in mind. Tools that were once understood as accessibility are now treated as suspect by default because they share a label with systems that can generate entire paragraphs on command.

There is another twist, too, and it is particularly grim if you are neurodivergent. A lot of the so-called “AI tells” that staff are now trained to look for are, inconveniently, the exact linguistic habits many neurodivergent students already have. If generative models have made academics suspicious of polished prose, some students are now tempted to write worse on purpose just to look convincingly human. When universities rely on stylistic pattern-spotting to police writing, they risk pathologising the very students who already sit at the margins of academic norms. When a narrow, neurotypical writing style becomes the benchmark for authenticity, everyone else becomes suspicious by default. That is not integrity. That is aesthetic policing.

This matters even if you do not care about disability policy in the abstract, because AI is already embedded in student study culture. A YouGov survey published in September 2025 found that 66 per cent of UK university students reported using AI for work or study towards their degree, including 33 per cent who said they did so at least weekly. In other words, this is not fringe behaviour. It is routine. The question is not whether AI should exist in universities. It already does. The question is whether institutions can distinguish between assistance and substitution without turning accessibility into collateral damage.

Institutional clarity matters here, and it is currently patchy. Jisc’s “Staff perceptions of AI 2025”, drawing on its Leadership Survey 2025, reported that only 37 per cent of UK higher education institutions had delivered staff development on AI. Policy without training produces uneven enforcement and confusion. For disabled students who rely on assistive technology, that ambiguity is not theoretical. It is risky.

So where does that leave us, in practical terms, rather than in culture-war terms?

The question is not whether AI should exist in universities. It already does. The question is whether institutions can distinguish between assistance and substitution without turning accessibility into collateral damage.

It leaves us with a distinction that is not hard to articulate but is strangely hard for institutions to operationalise. Assistive tools translate you. They help you access information, access your own words, and access the classroom. Speech-to-text gets your language out. Text-to-speech gets the text in. Lecture capture gives you time-shifted access to teaching. Structured summarisation lowers the cost of entry to dense reading so you can then verify it properly. Proofreading catches surface errors so your ideas can be heard.

Generative tools can do something else entirely. They can generate the argument-shaped thing itself. Some features can be used in accessibility-adjacent ways: simplifying dense prose, helping you interrogate your own notes, supporting restructuring when executive function collapses. But usefulness is not the same as legitimacy. The hinge is authorship. If a tool begins doing the reasoning, synthesis, or writing that an assignment is meant to assess, it has crossed the line.

The difficulty is that disabled students are often the first adopters and the first casualties. We adopt because we must. We get caught because the policy is vague. We are expected to be transparent, but transparency becomes dangerous when an institution cannot distinguish between access-supporting assistance and content substitution. We are asked to prove we are not cheating by using the mechanisms that make participation possible.

Here is the uncomfortable truth: disabled students were using “AI” long before anyone called it AI. We were using it when it was just software, or support, or the thing that let us get through a lecture without falling behind. We were using it quietly, competently, and without controversy. Then the label changed, and suddenly the tools that made education accessible were recast as threats.

AI did not make disabled students dependent. It made us possible. The panic came later. And if universities cannot tell the difference between a tool that helps a student think and a tool that thinks for them, the problem is not the technology. The problem is the institution. Until higher education learns to distinguish access from automation, disabled students will keep paying the price for a debate that was never about us in the first place.

Cover image: Franz26 via Pixabay
