
Howard Krieger
Director
Accessibility and the SHAFT Way
AI is simultaneously widening the tech gap and offering the best tools available to close it, especially for people with disabilities and those already on the wrong side of the digital divide. That tension makes accessibility one of the defining ethical and design challenges of this AI cycle.
The new accessibility paradox
As AI systems become embedded in education, work, healthcare, and civic life, people without reliable devices, connectivity, or assistive tech risk deeper exclusion than in earlier digital waves. As AI researchers Adekunle Osonuga and colleagues observe in the healthcare context, advanced tools can “demonstrate significant success” for underserved groups, but only if those groups actually get access.
Xiaohui Huang, writing on GeoAI, warns that concentrating powerful AI capabilities in already‑advantaged institutions “risks entrenching power” and widening “socioeconomic, educational, and infrastructural divides.” The same logic applies broadly: AI amplifies the advantages of those who already have bandwidth, literacy, and capital, unless accessibility is treated as a first‑order design constraint.
Accessibility expert Sheri Byrne‑Haber captures the cultural side of the problem: “Accessibility is not a problem to be solved. Accessibility is a culture to be built.” When AI products ship without that culture, they tend to codify existing barriers—through opaque interfaces, biased data, and paywalled “premium” capabilities that those who most need them cannot afford.
How AI is already expanding access
Despite the risks, disabled technologists and accessibility advocates are also clear that AI is unlocking kinds of access that were impossible at scale even a decade ago. Google’s work is a visible example: since 2009 it has used AI for automated captioning on YouTube, and today tools like Live Transcribe and Lookout provide real‑time transcription and AI‑powered scene descriptions for people who are Deaf, hard of hearing, blind, or low‑vision.
Accessibility specialist Molly Watt argues that inclusive design must “focus on people with a variety of needs, not just disabled people,” because features like captions, transcripts, and simplified language improve experiences for everyone. AI turbocharges that idea:
Computer vision now powers image‑description and facial‑recognition tools that can narrate scenes, read text, and even interpret facial expressions for blind users.
Natural‑language models summarize dense content and simplify reading level, which can be critical for people with cognitive disabilities, ADHD, or limited language proficiency (a rough way to measure that shift in reading level is sketched after this list).
AI‑driven navigation apps highlight accessible routes and facilities, reducing friction in daily life for people with mobility challenges.
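To make “reading level” measurable rather than a vague promise, the snippet below is a minimal sketch, in plain Python with no external libraries, of how a team might check that simplified text really scores lower on the Flesch‑Kincaid grade scale. The syllable counter is a crude heuristic and the two sample sentences are invented for illustration; a production pipeline would use proper readability tooling.

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real tooling would use a pronunciation dictionary.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # treat a trailing silent 'e' as non-syllabic
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid grade formula:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

original = ("The municipality hereby stipulates that remittance of the assessed levy "
            "must be effectuated prior to the conclusion of the fiscal quarter.")
simplified = "You must pay the tax before the end of the quarter."

print(f"original grade:   {flesch_kincaid_grade(original):.1f}")   # roughly college level
print(f"simplified grade: {flesch_kincaid_grade(simplified):.1f}") # roughly primary level
```

The same kind of before‑and‑after check can sit behind any AI simplification feature, so that “more accessible” is verified rather than assumed.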
As one accessibility article notes, AI‑based assistive tech is “removing the barriers that are often present in the lives of the disabled community” and doing so at falling cost, which is essential if these tools are to reach beyond a narrow elite.
When AI deepens the digital divide
The same advances can also widen the gap between those who can use AI fluently and those who can’t. An EDUCAUSE analysis on AI in higher education warns that paid generative‑AI tools risk “increasing disparities in digital access and literacy” unless low‑income students receive no‑cost access to full versions. Without that, AI becomes another advantage that tracks existing wealth and privilege.
Education researchers working on AI and digital equity frame the risk starkly: AI could “widen the gap between the digitally empowered and the digitally excluded” if access is left to market forces and infrastructure alone. Early evidence they cite includes:
AI tools rolled out in well‑resourced schools long before rural or underfunded districts,
language‑model interfaces that assume advanced literacy, and
“free” tiers that are meaningfully weaker than paid ones, creating a two‑tier learning ecosystem.
GeoAI researchers make a parallel point: when only institutions in rich regions can afford high‑end models, data, and compute, they effectively control the “spatial narratives and decisions” that AI informs, leaving marginalized communities as data sources rather than decision‑makers. This pattern repeats in credit scoring, hiring, and public services, where opaque models can reproduce or intensify existing bias if not deliberately checked.
Why AI is also the best tool to fix what AI breaks
Ironically, many experts in accessibility and digital inclusion argue that AI itself is the most promising way to mitigate the inequities it is amplifying, if it is designed and governed with inclusion in mind. Deque’s accessibility team notes that assistive technology, “when used with content meeting accessibility standards, can help people with invisible disabilities” as well as more visible impairments. AI can push that assistive layer much closer to real‑time, personalized support than previous generations of tools.
Examples of AI being used to counteract AI‑driven gaps include:
Cost and access models: EDUCAUSE authors propose that AI companies provide full paid versions of generative‑AI tools to financially needy students to “help level the playing field” in digital literacy and opportunity.
Bias‑aware design: GovTech highlights admissions tools that strip out names, dates, and ZIP codes, focusing instead on non‑cognitive traits like persistence and critical thinking to reduce bias in university admissions (a simplified sketch of that de‑identification step follows this list).
Infrastructure targeting: Research on AI and the digital divide suggests using AI itself to map where connectivity, devices, and literacy gaps are worst, helping governments and NGOs target interventions and investments.
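To make the de‑identification idea concrete, the sketch below shows, in plain Python, one way a pipeline could drop directly identifying fields and mask dates and ZIP codes in free‑text answers before any scoring model sees the record. The field names, regular expressions, and sample applicant are invented for this example; they are not drawn from the GovTech‑described tools.

```python
import re

# Fields treated as direct identifiers (hypothetical names for this example).
IDENTIFYING_FIELDS = {"name", "date_of_birth", "zip_code", "address"}
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")
ZIP_PATTERN = re.compile(r"\b\d{5}(?:-\d{4})?\b")

def redact_text(text: str) -> str:
    # Mask dates and ZIP codes that appear inside free-text answers.
    text = DATE_PATTERN.sub("[DATE]", text)
    return ZIP_PATTERN.sub("[ZIP]", text)

def deidentify(application: dict) -> dict:
    # Return a copy of the record with identifying fields removed and text masked.
    cleaned = {}
    for key, value in application.items():
        if key in IDENTIFYING_FIELDS:
            continue  # never forward direct identifiers to the scoring model
        cleaned[key] = redact_text(value) if isinstance(value, str) else value
    return cleaned

applicant = {
    "name": "Jordan Smith",
    "zip_code": "60629",
    "date_of_birth": "03/14/2006",
    "essay": "Since 03/14/2021 I have tutored neighbors near 60629 while working part time.",
    "persistence_score": 4,
}
print(deidentify(applicant))
```

The point of the sketch is the ordering: identifiers are removed before scoring, so the model never gets the chance to learn from them.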
As one Oracle‑curated quote from AI investor Joanne Chen puts it, “AI is good at describing the world as it is today with all of its biases, but it does not know how the world should be.” That distinction is crucial: letting AI describe current inequities can be powerful for diagnostics—identifying who is left out and where—but deciding how the world should be requires human values, law, and community input.
Designing AI accessibility as a system, not a feature
Accessibility leaders stress that inclusion will not emerge from technical fixes alone. AbilityNet quotes accessibility advocate Susan Scott‑Parker: “For most organisations, diversity and inclusion does not make disability a priority,” which implies that accessible AI will require a shift in governance, incentives, and culture as much as in code.
Several themes recur across accessibility and digital‑divide research:
“Accessibility is culture”: Byrne‑Haber’s framing means that AI teams must include disabled people and other marginalized groups from the outset, not as late‑stage testers of an already‑shaped system. Alistair Duggin of the UK Government Digital Service warns that “every design decision has the potential to include or exclude people,” a principle that applies acutely when decisions are encoded into models that scale globally.
Diverse teams and participatory design: UX leaders like Katy Arnold argue that “a diverse team is more likely to design for diversity well,” so AI organizations must change who is in the room—both in building models and in setting policy. Per Axbom’s challenge—“Whose voices are heard? Whose voices are missing? Why?”—is a practical test for any AI accessibility roadmap.
Policy and resource alignment: GeoAI scholars argue that mitigating AI‑driven divides “demands more than technological fixes; it necessitates inclusive policy frameworks, equitable resource distribution, and the deliberate cultivation of widespread [AI] literacy.” Without public investment and regulation, accessible AI will remain patchy and fragile.
In this view, AI is not automatically emancipatory or oppressive; it is an amplifier. Left alone, it amplifies existing inequities. But used intentionally, with disabled people and other excluded communities setting requirements and co‑designing solutions, it can amplify access instead: reading and writing support, live translation, multimodal interfaces, navigational guidance, and personalized learning, all adapting to people rather than forcing people to adapt to systems.
The irony is that AI has created the steepest version of the digital divide the world has yet seen, but it is also the most powerful toolkit available for making technology—and by extension education, work, and public life—more accessible than they have ever been. Whether it does one or the other will depend less on what the models can do, and more on who they are built for, who is at the table, and whether accessibility is treated as core infrastructure rather than an optional feature.


