
Why Your AI-Powered Product Might Be Excluding a Billion Potential Users (and How to Fix It)

Ursula Koski, CTO for Nordics Partners at AWS, speaking on the Tech Effect podcast

The problem: AI is accelerating but accessibility isn't keeping up

We're in the middle of an AI acceleration that's unlike anything the tech industry has seen before. Generative AI, machine learning, and large language models are moving from experimental tools to core product features at a pace that catches even experienced technology leaders off guard.

But there's a gap that's widening alongside this acceleration. While AI capabilities are advancing monthly, the accessibility of AI-powered products isn't keeping pace. As Ursula Koski, CTO for Nordics Partners at AWS, explains on a recent episode of the Tech Effect podcast: when it comes to accessibility, "We are in the beginning phases. We are kind of just leaving the start line, and we don't yet understand what the options are for how we enable people more."

This matters because the numbers are staggering. Approximately 16% of the global population, over a billion people, lives with some form of disability or condition that affects how they use technology. That's not a niche edge case. It's a market segment larger than the population of North America and Europe combined.

And yet, most digital products are built, tested, and shipped with the assumption that users will interact with them the way the development team does: full vision, full hearing, full motor control, neurotypical cognition, stable high-speed internet. When that assumption is wrong, and it's wrong far more often than most teams realize, the product fails silently. Users don't file bug reports about inaccessible interfaces. They just leave.

TL;DR

30-second summary

Why are AI-powered products failing over a billion potential users — and what should product teams do about it?

According to Ursula Koski, CTO for Nordics Partners at AWS, speaking on the Tech Effect podcast:

  1. Accessibility is a market opportunity, not a compliance checkbox. Over a billion people globally need some form of accommodation to use digital products. Add temporary and situational impairments and the addressable audience grows significantly larger — making accessibility a revenue issue, not just a fairness one.
  2. AI is both the solution and the risk. Voice interfaces, remote presence technology, and speech pattern analysis are already removing real barriers for users with disabilities. But AI trained on biased data can just as easily create new ones.
  3. Data bias is the single biggest accessibility risk in AI products. Every dataset reflects the assumptions of the people who built it. If your training data underrepresents certain populations, your model will fail those users — sometimes dangerously, as in healthcare or safety-critical applications.
  4. Retrofitting accessibility is exponentially harder than building it in from the start. This applies to training data, UI design, workflows, and QA processes. Biased models are far harder to correct post-deployment than biased interface assumptions.
  5. The European Accessibility Act changes the compliance landscape. A single standardized framework now applies across all EU member states, with enforcement mechanisms that are expected to expand. For AI-powered products, this makes accessibility a regulatory requirement, not an aspiration.
  6. Diverse testing teams catch what homogeneous ones miss. If everyone on your testing team interacts with the product the same way, you will miss the failure modes that affect users who don't. Real accessibility testing requires real diversity of perspective.

Bottom line: AI acceleration without accessibility investment creates products that exclude a billion potential users, expose businesses to measurable revenue loss and regulatory risk, and embed bias that becomes harder to correct over time. According to Ursula Koski on Tech Effect, the organizations that get this right build accessibility in from the start, audit their data actively, and test with teams that reflect the full range of people they're building for.

The business case that goes beyond compliance

The fairness argument for accessibility is clear: everyone deserves equal access to digital products and services. But when you're making the case to senior leadership or allocating budget, the business argument needs to be just as sharp.

Koski frames it in three dimensions that any business leader can evaluate:

Expanded market share 

A billion people globally may need some form of accommodation to use your product. If your product isn't accessible, you're not just excluding a small minority, you're leaving an enormous addressable market on the table. And it's not only people with permanent disabilities. Temporary impairments (a broken arm, recovering from eye surgery), situational limitations (bright sunlight on a screen, a loud environment), and age-related changes all push users into needing accessible features. When you build for accessibility, you serve a far larger audience than you initially target.

Brand perception and trust 

How your brand is perceived on accessibility matters to consumers, to enterprise procurement teams, and to regulators. Being known as an accessible platform is a competitive differentiator, especially in sectors like healthcare, education, and government where accessibility isn't optional.

Reduced business risk 

If your product is safe and functional for the majority but not for everyone, that gap represents quantifiable business risk. In some cases, it leads to regulatory penalties. In others, it means lost revenue from customers who simply can't complete a purchase or use a service. As Koski notes, that risk "actually amounts to money, and in some cases, extreme cases, you might even see a loss of business."

The e-commerce example makes this tangible. During a TestDevLab digital accessibility conference (Quality Forge), a speaker with a visual impairment shared how he tried to buy a pair of boots from a popular brand's online store, and simply couldn't complete the purchase. He eventually went to a physical store. But most customers in that situation won't make the trip. They'll go to a competitor whose site works for them. Multiply that by thousands of transactions over a year, and the revenue impact becomes significant.

Real examples of how AI is already enhancing accessibility 

The acceleration in AI isn't just creating challenges for accessibility, it's also creating powerful new tools. Here are some of the ways AI is already being applied to make products more accessible, drawn from real-world examples Koski shared from her work at AWS and with partners across the Nordics.

Voice interfaces as accessibility infrastructure

Voice assistants like Amazon's Alexa started as convenience features, but they've become genuine accessibility tools. The ability to control your environment through speech—turning on lights, setting reminders, managing routines—is transformative for users with motor impairments, vision loss, or cognitive differences.

One particularly compelling use case: parents of children with autism can set up structured routines through Alexa. Instead of the repeated, high-pressure reminders ("this is the fifth time I've asked you to brush your teeth"), the voice assistant handles the prompting, reducing conflict and supporting the child's autonomy. It's a case where a feature designed for general convenience turns out to be deeply valuable for accessibility, once you look beyond the typical user profile.

Photorealistic vision and remote presence

VR and AR technologies are enabling what Koski describes as "teleportation": the ability to be present at a remote location without physically traveling there. For someone who uses a wheelchair or has mobility limitations, the ability to inspect a manufacturing site, attend a meeting, or visit a location through high-fidelity VR removes a barrier that no amount of web accessibility compliance could address.

Companies like Finland's Varjo are developing headsets with photorealistic visual quality that make remote presence genuinely useful for professional work, not just entertainment. The accessibility implications are significant: physical presence is no longer a prerequisite for participation.

Speech pattern analysis in mental health

In the e-health space, AI tools are beginning to analyze speech patterns to detect indicators of conditions like depression. These tools don't replace clinical judgment, they augment it. A psychiatrist conducting an initial assessment can use AI analysis as an additional signal, identifying patterns in speech that correlate with depression and prompting more targeted follow-up questions.

This is still early-stage, but the results Koski has seen from partners in this space are promising: the tools are proving "quite correct" in identifying potential indicators, giving medical professionals an additional tool in their diagnostic process.

Your AI works in the demo. Does it work for everyone in production? 

Voice interfaces, captioning tools, and AI assistants all have failure modes that standard QA won't catch, like degraded accuracy with accented speech, hallucinations in summaries, and captions that drift out of sync. TestDevLab's AI testing services cover the full stack.

The hidden risk: When AI creates new barriers

AI's potential to enhance accessibility comes with a significant limitation: if the data and processes behind AI tools carry bias, those tools can create new barriers instead of removing existing ones.

The data bias problem

Data always has a bias because it's built by humans. 

This is the risk Koski emphasizes most. Every dataset reflects the perspectives, demographics, and assumptions of the people who collected and labeled it. When that dataset is used to train an AI model, the bias gets encoded into the model's behavior. The consequences can range from inconvenient to dangerous. Consider:

  • Healthcare data gaps. If training data for a medical AI tool doesn't adequately represent diverse populations, like different ethnicities, ages, body types, and genetic backgrounds, the tool may produce unreliable results for underrepresented groups. In clinical settings, this isn't just a fairness problem, it's a safety risk. Certain drugs and treatments affect different populations differently, and a model trained on unrepresentative data may miss these variations.
  • Consumer hardware blind spots. Does your fitness tracker or health monitoring device work accurately on all skin tones? Early versions of pulse oximeters and heart rate monitors were notoriously less accurate on darker skin. If the training data and testing protocols don't account for this, you ship a product that works for some users and fails for others, without knowing which users are affected.
  • The crash test dummy lesson. Koski draws a parallel to automotive safety testing. For decades, crash test dummies were built to the dimensions of an average male. The result was that women were significantly more likely to die in car accidents, because the safety systems weren't designed or tested for their body types. The same pattern plays out in AI. If you only train and test with data that represents the majority, you build a product that's unsafe for everyone else.
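The crash-test-dummy failure mode is measurable in software: aggregate accuracy can look fine while one subgroup's accuracy collapses. A minimal sketch (toy data, hypothetical group labels "A" and "B") of disaggregating a metric by subgroup, which is one way to make this kind of bias visible:

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute overall accuracy and per-group accuracy.

    A model can score well in aggregate while failing badly for a
    minority subgroup -- the single overall number hides the gap.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Toy example: 90 majority-group samples the model gets right,
# 10 minority-group samples it gets mostly wrong.
preds  = [1] * 90 + [1] * 10
labels = [1] * 90 + [0] * 8 + [1] * 2
groups = ["A"] * 90 + ["B"] * 10

overall, per_group = accuracy_by_group(preds, labels, groups)
print(overall)          # 0.92 -- looks acceptable in aggregate
print(per_group["B"])   # 0.2  -- the model fails group B
```

The design point: any metric you report for the whole test set should also be reported per subgroup, and a release gate on the worst-performing group catches exactly the failures the aggregate hides.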

What to do about it

The solution isn't to pretend bias can be eliminated—it can't. The solution is to make bias visible, measurable, and manageable.

  • Retrain data scientists. Teams that have worked with machine learning for years may not have been trained to identify and mitigate data bias. This is an education gap that needs active investment, not just awareness.
  • Audit your training data. Look for representation gaps. Are different populations, conditions, and use contexts adequately represented? If not, consider whether synthetic data can fill the gaps, not to remove bias entirely, but to correct the most dangerous imbalances.
  • Use established frameworks. Universities across the Nordics and Baltics are developing ethical frameworks for evaluating data bias. And mature tools already exist. Koski highlights TPGI's Accessibility Resource Center, which is used by organizations including the US intelligence community—not because they couldn't build their own testing framework, but because the quality of an established, vetted tool was higher than what they could build from scratch.
  • Test with diverse teams. Diversity of mind in your testing team means diversity of perspective on what "working correctly" actually looks like. If your testing team all interacts with the product the same way, you'll miss the failure modes that affect users who don't.
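The audit step above can be made concrete by comparing each group's share of the training data against a reference population share and flagging gaps beyond a tolerance. A minimal sketch, with made-up group labels and illustrative reference shares (real audits would use validated demographic baselines):

```python
def representation_gaps(dataset_groups, reference_shares, tolerance=0.5):
    """Flag groups whose share of the dataset falls below
    `tolerance` times their expected share of the population.

    dataset_groups   -- iterable of group labels, one per sample
    reference_shares -- expected population share per group (sums to 1)
    """
    n = len(dataset_groups)
    counts = {}
    for g in dataset_groups:
        counts[g] = counts.get(g, 0) + 1
    flagged = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / n
        if actual < tolerance * expected:
            flagged[group] = {"expected": expected, "actual": actual}
    return flagged

# Hypothetical skin-tone categories with illustrative reference shares.
data = ["light"] * 880 + ["medium"] * 100 + ["dark"] * 20
gaps = representation_gaps(data, {"light": 0.5, "medium": 0.3, "dark": 0.2})
print(gaps)  # "medium" and "dark" fall below half their expected share
```

A check like this doesn't remove bias, but it turns "audit your training data" into a repeatable, reviewable step that can block a training run before an imbalance is encoded into the model.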

Building accessibility in from the start

One of the most consistent themes across accessibility expertise is that retrofitting is exponentially harder than building right from the beginning. This applies to AI-powered products just as much as traditional software, arguably more, because biased training data is much harder to correct after a model is deployed than biased UI assumptions.

Here is Koski's practical framework for organizations:

Understand the regulatory environment 

In the US, Sections 255 and 508 establish accessibility requirements with reporting mechanisms. In the EU, the European Accessibility Act creates a standardized set of rules across all member states. These aren't static requirements; they're "living entities" that will evolve, and the reporting mechanisms will likely expand.

Map your business risk 

Accessibility gaps aren't just compliance risks. They're customer experience risks, brand risks, and in some sectors, safety risks. Quantify what you're exposed to.

Don't build everything from scratch 

Use established testing tools and frameworks rather than reinventing them. Accessibility testing platforms, data bias detection tools, and quality assurance frameworks already exist at a maturity level that's hard to replicate internally. Partnering with specialized testing providers gives you a head start and third-party credibility.

Distinguish usability from accessibility 

These are related but different. A product can be usable for the majority but not accessible for everyone. Koski gives the example of color blindness: there are six different types, and a traffic light system that's perfectly usable for most people can be dangerous for someone who can't differentiate between the colors, especially when the light order varies by location.
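Part of this distinction is automatable. Color contrast, for example, has a precise WCAG definition: linearize each sRGB channel, compute relative luminance, then take the ratio between the lighter and darker color. A minimal checker following that published formula:

```python
def _channel(c8):
    """Linearize one 8-bit sRGB channel per the WCAG formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a #RRGGBB color (WCAG 2.x definition)."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio("#FFFFFF", "#000000"))  # 21.0
print(contrast_ratio("#00FF00", "#FF0000"))  # ~2.91: pure green on pure red
```

Note the second result: green on red falls below the 3:1 minimum WCAG sets for graphical objects, which is the traffic-light problem in numbers. And contrast is only the automatable slice; the deeper fix is never encoding meaning in color alone.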

Test the full workflow, not just individual screens 

Accessibility failures often occur in transitions: between pages, between steps in a checkout process, and between input modes. Test the complete user journey, not just static interface elements. As Koski describes: a booking site where a small typo cancels payment but leaves the seats locked creates a blocker that repeats for every user who encounters it. The right testing tools catch these patterns before they become systemic customer experience failures.
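Koski's booking-site bug is exactly the kind of failure a journey-level test catches and a screen-level test misses: each screen works in isolation, but state leaks across steps. A minimal sketch (hypothetical `BookingFlow` model, deliberately simplified) of a test that walks the whole flow and asserts a cross-step invariant:

```python
class BookingFlow:
    """Toy model of a seat-booking journey: select seats, then pay.

    The app itself is a stand-in; the point is the cross-step invariant.
    """
    def __init__(self):
        self.locked_seats = set()

    def select_seats(self, seats):
        self.locked_seats |= set(seats)

    def pay(self, card_number):
        if not card_number.isdigit():   # e.g. a small typo in the input
            self.locked_seats.clear()   # the fix: release seats on failure
            return False
        return True

# A journey-level test: run the full flow, then check state across steps.
flow = BookingFlow()
flow.select_seats({"12A", "12B"})
paid = flow.pay("4111x11111111111")     # the typo cancels the payment...
assert not paid
assert not flow.locked_seats            # ...and must also unlock the seats
print("journey invariant holds: failed payment releases locked seats")
```

No test of the seat-selection screen or the payment screen alone would catch the original bug; only an assertion about what a failed step leaves behind for the next one does.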

What the European Accessibility Act means for your product

The European Accessibility Act (EAA) establishes a standardized set of accessibility requirements across all EU member states. This is significant for two reasons:

First, it eliminates the patchwork of country-specific rules. Whether your users are in France, Finland, Latvia, Germany, or Italy, the requirements are the same. For businesses operating across multiple European markets, this simplifies compliance—one standard to meet, not twenty-seven.

Second, enforcement is coming with teeth. Reporting mechanisms are being added, and Koski expects these will expand over time. This isn't a guideline you can treat as aspirational. It's a regulatory requirement with consequences for non-compliance.

For companies building AI-powered products, the EAA means accessibility needs to be part of your development process, not a post-launch audit. If your AI tool introduces bias that makes the product inaccessible to certain groups, that's a compliance issue under the EAA, not just a product quality issue.

Key takeaways for product teams

Accessibility is a market opportunity, not just a compliance burden. Over a billion people globally need some form of accommodation. Add temporary and situational impairments, and the addressable audience is even larger. Building accessible products expands your market.

AI is both the opportunity and the risk. AI tools can dramatically improve accessibility through voice interfaces, remote presence, speech analysis, and more. But AI trained on biased data can just as easily create new barriers. Both sides need active management.

Data bias is the single biggest accessibility risk in AI. You can't fully eliminate it, but you can identify it, measure it, and mitigate it. Invest in data auditing, diverse training datasets, and team education on bias detection.

Start accessible, don't retrofit. Integrating accessibility from the beginning of your development process is dramatically cheaper and more effective than fixing it later. This applies to your training data, your UI, your workflows, and your testing.

Use established tools and partners. Accessibility testing, data bias detection, and regulatory compliance frameworks already exist at high maturity. Don't build from scratch what you can adopt from specialized providers.

Test with diverse teams and real user perspectives. Diversity of mind in your testing process catches failure modes that homogeneous teams miss. Include people with disabilities in your testing, their firsthand experience reveals issues that automated tools alone cannot.

Listen to the full conversation with Ursula Koski on the Tech Effect podcast

FAQ

Most common questions

How many people are affected by inaccessible digital products?

Approximately 16% of the global population, over a billion people, lives with some form of disability or condition that affects how they use technology. That figure grows further when you include temporary impairments such as injuries, situational limitations like bright sunlight or loud environments, and age-related changes. Inaccessible products don't affect a niche minority; they exclude a market segment larger than the combined populations of North America and Europe.

How does data bias in AI create accessibility problems?

AI models learn from training data, and that data reflects the perspectives and demographics of the people who collected it. When certain populations are underrepresented, like different ethnicities, body types, age groups, or disability profiles, the model produces unreliable or harmful results for those users. Examples include pulse oximeters that perform less accurately on darker skin tones and healthcare AI tools that miss treatment variations relevant to underrepresented groups. Bias cannot be fully eliminated, but it can be identified, measured, and mitigated through data auditing and diverse training sets.

What does the European Accessibility Act require from product teams?

The European Accessibility Act establishes a single standardized set of accessibility requirements across all EU member states, replacing the previous patchwork of country-specific rules. For companies building AI-powered products, it means accessibility must be integrated into the development process, not addressed in a post-launch audit. Enforcement mechanisms are already in place and expected to expand, making non-compliance a regulatory and financial risk rather than merely a reputational one.

Why is it harder to fix accessibility problems in AI after deployment?

With traditional software, an inaccessible interface can often be redesigned and retested. With AI, bias embedded in training data affects model behavior at a fundamental level, and retraining a deployed model is significantly more costly and complex than correcting a UI assumption before launch. The same principle applies to workflow design and QA processes. Starting with accessibility as a requirement is dramatically cheaper and more effective than retrofitting it later.

How is AI already being used to improve accessibility in real products?

Several meaningful applications are already in production. Voice interfaces like Amazon Alexa have become genuine accessibility infrastructure for users with motor impairments or cognitive differences, including structured routine management for children with autism. VR and AR technologies are enabling remote presence for users with mobility limitations, removing the requirement for physical travel to participate in professional settings. In mental health, AI tools are beginning to analyze speech patterns for indicators of conditions like depression, giving clinicians an additional signal without replacing their judgment.

A billion potential users is too large an audience to get testing wrong.

Accessibility failures and bias-driven edge cases don't always show up in standard QA. We help product teams test the full user journey, across the full range of users who matter.


Save your team from late-night firefighting

Stop scrambling for fixes. Prevent unexpected bugs and keep your releases smooth with our comprehensive QA services.

Explore our services