
Software as a Medical Device (SaMD)

Doctor using a blood glucose monitor on patient

Healthcare is becoming inseparable from software. A survey shows that 81% of healthcare executives believe technology will reshape their business within the next three years, and software already powers everything from AI-assisted diagnostics to continuous glucose monitoring. The global digital health market, which includes SaMD, is projected to reach over $1 trillion by 2030, growing at a CAGR of more than 18%. That scale brings both opportunity and pressure. If a ride-hailing app crashes, a user might miss a trip. If SaMD malfunctions, it can compromise a diagnosis, treatment, or even patient safety.

The stakes are evident in the data. In 2023 alone, U.S. healthcare data breaches affected more than 133 million people. Meanwhile, the FDA has reported a steady rise in recalls involving software-related failures in medical devices, with cybersecurity vulnerabilities and incorrect outputs among the leading causes. In Europe, regulatory scrutiny is tightening under the MDR, especially for AI-driven clinical decision-support systems, which are increasingly falling into higher-risk categories.

For product leaders and QA teams, this convergence of demand, regulation, and risk means SaMD isn’t just another “digital product.” It’s a regulated medical technology. Every line of code must withstand the scrutiny of clinical validation, regulatory audits, and real-world reliability. And testing strategies need to reflect that reality: it’s not enough to ensure that an app “works” — you need evidence that it consistently performs as intended, in high-stakes environments, against rigorous safety and security standards.

This blog unpacks what qualifies as SaMD, why it matters to QA and product leaders, how global regulations are evolving, and what a robust testing and compliance strategy looks like in practice.

What counts as SaMD?

The term “Software as a Medical Device” (SaMD) was formalized by the International Medical Device Regulators Forum (IMDRF), which defines it as software intended for one or more medical purposes that performs those purposes without being part of a hardware medical device. That definition may sound straightforward, but in practice, it creates gray areas that product teams and QA leaders need to navigate carefully.

For example, a step-counting fitness tracker app is generally not considered SaMD — it promotes general wellness without a medical claim. But an app that uses accelerometer data to detect signs of Parkinson’s tremors, or one that analyzes cardiac rhythms to flag arrhythmias, is SaMD. The distinction lies in the intended medical purpose stated by the manufacturer. If your software claims to diagnose, monitor, predict, or treat a medical condition, it likely qualifies as SaMD.

This line is becoming increasingly blurred as more wellness apps push into clinical territory. A Deloitte study found that over 60% of healthcare organizations are either piloting or scaling digital tools that leverage AI for decision support. Many of these tools fall into the SaMD category, and the regulatory oversight they attract depends on both their clinical significance and the risk to patients if the software misbehaves.

The IMDRF risk categorization framework is especially important here. It looks at two dimensions:

  • Significance of the information provided by the software (is it critical for making a diagnosis or just supportive information?)
  • State of the healthcare situation (are users in a life-threatening emergency, or is the condition chronic and stable?)

These factors influence how regulators classify the device, which in turn dictates the level of evidence, testing, and quality assurance required. For QA teams, this means risk-based testing strategies are not optional extras; they are the baseline regulators expect.
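
The IMDRF framework combines those two dimensions into categories I through IV, with IV carrying the highest risk. As a minimal sketch (the category values follow the published IMDRF/SaMD WG/N12 matrix, while the function and label names are illustrative), the mapping can be expressed as a simple lookup:

```python
# Minimal sketch of the IMDRF SaMD risk-categorization matrix (IMDRF/SaMD WG/N12).
# Keys are (healthcare situation, significance of the information provided);
# values are the SaMD category, where IV is the highest risk and I the lowest.
CATEGORY_MATRIX = {
    ("critical",    "treat or diagnose"):          "IV",
    ("critical",    "drive clinical management"):  "III",
    ("critical",    "inform clinical management"): "II",
    ("serious",     "treat or diagnose"):          "III",
    ("serious",     "drive clinical management"):  "II",
    ("serious",     "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"):          "II",
    ("non-serious", "drive clinical management"):  "I",
    ("non-serious", "inform clinical management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the IMDRF SaMD category for a situation/significance pair."""
    return CATEGORY_MATRIX[(situation, significance)]

# Illustrative mapping: software that flags possible arrhythmias for clinician
# review in a serious (but not immediately life-threatening) condition.
assert samd_category("serious", "drive clinical management") == "II"
```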

In short: not every healthcare app is SaMD, but if your product crosses into clinical decision-making or therapeutic action, you’re operating in a highly regulated space where software quality is directly tied to patient safety.

Why SaMD matters to QA and product leaders

When you develop a consumer app, the worst-case scenario is usually lost revenue, reputational damage, or unhappy users. With Software as a Medical Device, the stakes are higher: a software malfunction can directly impact diagnosis, treatment, and patient safety. That reality fundamentally changes how product leaders and QA teams must approach the entire development lifecycle.

1. Patient safety is the ultimate priority

Every SaMD project must begin with risk management. Under ISO 14971, risks need to be identified, quantified, mitigated, and continuously monitored throughout the product lifecycle. Unlike in traditional apps, bugs here are not just annoyances; they can cause delayed treatments, misdiagnoses, or even fatalities. For instance, in 2022, recalls were issued for insulin pumps and related software due to algorithmic malfunctions that could lead to overdosing or underdosing patients. For QA leaders, this reinforces the need for exhaustive testing, risk traceability, and validation against real-world edge cases.
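
To illustrate what risk traceability can look like in practice, here is a minimal sketch of a risk record linking a hazard to its mitigations and to the tests that verify them. The field names, the 1-to-5 scoring scheme, and the example identifiers are hypothetical rather than prescribed by ISO 14971:

```python
# Hypothetical sketch of a risk-traceability record: hazard -> mitigations -> tests.
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    hazard_id: str
    description: str
    severity: int      # 1 (negligible) .. 5 (catastrophic), illustrative scale
    probability: int   # 1 (improbable) .. 5 (frequent), illustrative scale
    mitigations: list = field(default_factory=list)
    verification_tests: list = field(default_factory=list)

    @property
    def risk_index(self) -> int:
        return self.severity * self.probability

    def is_fully_traced(self) -> bool:
        # Every hazard needs at least one mitigation and at least one verifying test.
        return bool(self.mitigations) and bool(self.verification_tests)

overdose_risk = RiskRecord(
    hazard_id="HAZ-012",
    description="Bolus calculator suggests an insulin dose above the configured safe limit",
    severity=5,
    probability=2,
    mitigations=["Hard upper limit on suggested dose", "Confirmation step for high doses"],
    verification_tests=["TC-104", "TC-105"],
)
assert overdose_risk.is_fully_traced() and overdose_risk.risk_index == 10
```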

2. Regulatory discipline drives the SDLC

Standards like IEC 62304 are the backbone of software lifecycle compliance. They require everything from design traceability to rigorous anomaly handling and post-market surveillance. This means QA managers need to think beyond unit or integration testing and establish compliance-aware testing pipelines. Automated test reporting, version control tied to requirements, and independent verification steps are evidence you’ll need to show regulators.
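
One common way to produce that evidence is to tag automated tests with the requirement they verify, so the test report doubles as requirement-to-test traceability. A minimal pytest-based sketch follows; the custom marker, requirement ID, and system-under-test function are hypothetical, and pytest itself is a tooling choice rather than something IEC 62304 mandates:

```python
# Hypothetical sketch: each test declares the requirement it verifies.
# (Register the custom "requirement" marker in pytest.ini to avoid warnings.)
import pytest

def get_refresh_interval_seconds() -> int:
    """Stand-in for the real system under test."""
    return 300

@pytest.mark.requirement("SRS-042")  # e.g. "Readings shall refresh at least every 5 minutes"
def test_refresh_interval_meets_requirement():
    assert get_refresh_interval_seconds() <= 300
```

A small conftest.py hook or reporting plugin can then export the requirement-to-test mapping alongside pass/fail results as an audit artifact.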

3. Usability is now a safety issue

In consumer software, poor UX leads to frustrated users. In SaMD, poor UX can cause clinical mistakes. IEC 62366-1 places usability on par with safety testing, requiring that developers account for human factors in clinical environments. Imagine a mobile app used in emergency care: if critical alerts are buried in cluttered interfaces or if navigation takes too long, patient outcomes can be compromised. QA and product leaders must validate not only that the software “works,” but that it can be used safely and intuitively by clinicians and patients under pressure.

Woman using a health app

4. Cybersecurity and privacy are non-negotiable

The healthcare sector is among the top targets for cyberattacks, with IBM’s Cost of a Data Breach report citing healthcare as the most expensive industry for breaches, at an average of nearly $11 million per incident in 2023. For SaMD providers, this is more than an IT concern — regulators now expect proactive threat modeling, secure coding, vulnerability disclosure processes, and post-market monitoring. This means that the security testing strategy must include penetration testing, data integrity validation, and verification of secure update mechanisms.
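
As one example of what "verification of secure update mechanisms" can mean at the test level, here is a minimal sketch that accepts an update package only if its SHA-256 digest matches the release manifest and its Ed25519 signature verifies against the manufacturer's public key. It uses the widely available cryptography package; the manifest layout and key handling are hypothetical and deliberately simplified:

```python
# Hypothetical sketch: reject an update package unless its digest matches the
# release manifest AND the package carries a valid manufacturer signature.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(package: bytes, expected_sha256: str,
                  signature: bytes, public_key_bytes: bytes) -> bool:
    # 1. Integrity: the package must match the digest published in the manifest.
    if hashlib.sha256(package).hexdigest() != expected_sha256:
        return False
    # 2. Authenticity: the package must be signed by the manufacturer's key.
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, package)
    except InvalidSignature:
        return False
    return True
```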

5. Market differentiation depends on quality

Beyond compliance, quality is becoming a competitive differentiator. Hospitals, insurers, and regulators are scrutinizing not just whether SaMD meets minimum requirements, but how consistently it performs in real-world settings. Software that ships faster but fails in clinical trials or post-market surveillance can delay approvals or even trigger recalls. On the flip side, SaMD products with robust QA practices and demonstrable safety records are more likely to win trust from healthcare providers and scale globally.

For product leaders, the message is clear: SaMD isn’t just a technical challenge; it’s a strategic one. Your testing and QA practices aren’t just about ensuring functionality; they’re about ensuring trust, compliance, and ultimately, patient safety.

The global regulatory picture

One of the biggest challenges with SaMD is that it doesn’t exist in a single regulatory ecosystem. A SaMD product intended for global markets must often satisfy requirements in both the European Union and the United States — two regions that share common principles but differ in details. These differences can significantly affect timelines, testing priorities, and go-to-market strategies.

European Union (EU)

The EU’s Medical Device Regulation (MDR) was a turning point for SaMD. Rule 11 of the MDR makes most diagnostic and therapeutic decision-support software at least Class IIa, with higher-risk applications (for example, life-supporting or critical diagnostic tools) classified as Class IIb or even III. This reclassification has pulled many software tools that previously escaped scrutiny into higher regulatory categories.

In June 2025, the Medical Device Coordination Group (MDCG) issued updated guidance (MDCG 2019-11 Rev.1), clarifying the qualification and classification of software, including AI-based systems. Notably:

  • The intended purpose stated by the manufacturer remains the key factor in classification.
  • AI-enabled diagnostic software is likely to fall into higher classes due to the unpredictability of algorithmic performance.
  • Wellness or administrative apps are explicitly excluded unless they cross into clinical territory.

This means more stringent verification, validation, and clinical evaluation requirements. SaMD providers selling in the EU must prepare detailed technical documentation, demonstrate traceability from risks to test cases, and prove usability and cybersecurity safeguards. Post-market surveillance obligations are also more demanding, requiring ongoing monitoring of software performance in real-world use.

United States (US)

The U.S. Food and Drug Administration (FDA) takes a risk-based approach that closely aligns with IMDRF principles. SaMD can fall into Class I, II, or III depending on the significance of the information it provides and the severity of the condition it addresses.

In June 2025, the FDA finalized its premarket cybersecurity guidance, which now requires manufacturers to:

  • Submit a Software Bill of Materials (SBOM) with detailed information on third-party components (see the sketch after this list).
  • Demonstrate secure design, coding, and vulnerability management practices.
  • Provide clear plans for post-market updates and coordinated vulnerability disclosure.
  • Show evidence of threat modeling and penetration testing.
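
As an illustration of the SBOM item, the sketch below collects the name and version of every installed third-party Python component into a simple inventory. Real submissions normally use a standardized machine-readable format such as CycloneDX or SPDX; this structure is illustrative only:

```python
# Hypothetical sketch: a bare-bones component inventory in the spirit of an SBOM.
import json
from importlib import metadata

def build_component_inventory() -> dict:
    components = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in metadata.distributions()
    ]
    return {"components": sorted(components, key=lambda c: str(c["name"]))}

if __name__ == "__main__":
    print(json.dumps(build_component_inventory(), indent=2))
```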

This shift reflects growing concern about cybersecurity as a patient safety issue, especially after high-profile ransomware attacks have disrupted hospitals and delayed treatments. For QA leaders, this raises the bar — penetration testing, threat modeling, and secure update validation are no longer optional but expected as part of regulatory submissions.

What this means for product and QA teams

  • Earlier planning is critical: Waiting until late-stage development to consider EU MDR or FDA requirements is a recipe for delays. Product managers should align regulatory and QA strategies early in the roadmap.
  • Testing strategies must be risk-driven: Higher-risk classifications demand stronger clinical evidence, more rigorous verification/validation, and usability testing under realistic conditions.
  • Cybersecurity is part of safety: Both EU and US regulators now view cybersecurity as intrinsic to patient safety. Security testing and documentation are essential to avoid approval bottlenecks.
  • Global launch means dual compliance: A SaMD product aiming for both EU and US markets must satisfy overlapping but distinct requirements. Harmonizing test evidence and documentation across both regions can reduce duplication and speed up approvals.

In short, 2025 is shaping up to be a year where SaMD regulations are catching up with technological realities. For QA and product leaders, the regulatory environment isn’t just a compliance hurdle — it’s a strategic factor that shapes product design, testing priorities, and market access.

Doctor using a laptop

What makes SaMD different from regular health apps?

At first glance, many health-related apps look similar: they collect data, process it, and deliver insights to users. But there’s a fundamental distinction between wellness apps and SaMD: intended purpose. If the software claims to diagnose, monitor, prevent, or treat a medical condition, it crosses into the regulated space of SaMD. That distinction brings with it stricter requirements for testing, documentation, and lifecycle management.

1. Medical claims trigger regulation

  • A calorie-tracking app that helps users maintain weight is a wellness tool.
  • A diabetes management app that recommends insulin dosage adjustments qualifies as SaMD because it directly supports medical treatment.

This means the test strategy must shift from “does it work as designed?” to “does it consistently perform safely and effectively in a medical context?”

2. Risk dictates rigor

Regulators expect the level of testing to match the level of patient risk. A meditation app crashing is inconvenient. A seizure-detection app crashing could be life-threatening. Higher-risk SaMD requires comprehensive verification/validation, independent testing, and post-market surveillance.

3. Evidence requirements are higher

Wellness apps may succeed with user feedback and performance metrics. SaMD must produce objective evidence of clinical safety and performance — often through clinical evaluations, analytical validation, and rigorous usability studies. This evidence is not optional; it’s what regulators and healthcare providers rely on to approve and adopt the software.

4. Usability is not just UX — it’s safety

In consumer apps, a confusing interface frustrates users. In SaMD, poor design can lead to misdiagnosis or mistreatment. Testing must include usability engineering under IEC 62366-1, accounting for real-world conditions: clinicians under time pressure, patients with limited digital literacy, or elderly users with impaired vision.

5. Cybersecurity is part of compliance

While most consumer apps face data privacy laws, SaMD faces stricter obligations. Any vulnerability in medical software could expose sensitive health data or disrupt treatment delivery. Testing, therefore, must integrate penetration testing, data encryption validation, and secure update verification — all documented for regulatory review.
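
To make "data encryption validation" concrete, here is a minimal test sketch checking that a health record is stored under authenticated encryption (AES-GCM) and that tampered ciphertext is rejected. It uses the cryptography package; key management and the actual storage layer are hypothetical and out of scope:

```python
# Hypothetical sketch: records must be unreadable at rest and tamper-evident.
import os
import pytest
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def test_record_is_encrypted_and_tamper_evident():
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # standard AES-GCM nonce size
    record = b'{"patient_id": "P-001", "hba1c": 6.8}'

    ciphertext = AESGCM(key).encrypt(nonce, record, None)
    assert record not in ciphertext  # plaintext must never appear in storage

    assert AESGCM(key).decrypt(nonce, ciphertext, None) == record

    tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 0x01])
    with pytest.raises(InvalidTag):  # authentication must fail on tampering
        AESGCM(key).decrypt(nonce, tampered, None)
```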

Common SaMD pitfalls (and how to avoid them)

When developing SaMD, some common pitfalls can quietly escalate risks or regulatory burdens if not addressed early. Key areas to watch include:

  • Treating “intended use” as a marketing exercise. Regulators classify devices based on intended use; vague or overbroad claims can push your product into higher risk classes with heavier evidence requirements. Define and tighten claims from the outset.
  • Underestimating use-related risk. Clinical environments are high-pressure and noisy, and even minor usability issues can become safety hazards. Conduct formative studies with realistic users and environments.
  • Bolting on cybersecurity. Security cannot be an afterthought. With large-scale breaches capable of disrupting care, cybersecurity must be incorporated into design, development, and testing from day one.
  • Weak change control. Even small updates to algorithms or datasets can alter clinical performance. Establish regression testing and re-validation processes before shipping updates (see the sketch after this list).
  • Poor real-world monitoring. Without proper observability, teams cannot detect drift, identify anomalies, or respond quickly to incidents. Implement post-market monitoring from the start to close the feedback loop.
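
Picking up the change-control item above, a minimal regression-gate sketch: before an updated algorithm ships, its outputs on a frozen reference dataset are compared against the previously validated baseline, and any drift beyond tolerance blocks the release. The file names, tolerance, and model interface are hypothetical:

```python
# Hypothetical sketch: block a release if any reference case drifts beyond tolerance.
import json

TOLERANCE = 0.01  # illustrative acceptance threshold per reference case

def regression_gate(new_model, baseline_path="baseline_outputs.json",
                    cases_path="reference_cases.json") -> bool:
    with open(baseline_path) as f:
        baseline = json.load(f)   # {case_id: previously validated output}
    with open(cases_path) as f:
        cases = json.load(f)      # {case_id: input features}

    for case_id, inputs in cases.items():
        new_output = new_model.predict(inputs)
        if abs(new_output - baseline[case_id]) > TOLERANCE:
            print(f"Regression on {case_id}: {new_output} vs baseline {baseline[case_id]}")
            return False
    return True
```
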
Woman using a phone and a laptop

Looking ahead: AI-enabled SaMD

Artificial intelligence and machine learning are transforming SaMD, enabling predictive diagnostics, personalized treatment recommendations, and real-time monitoring. However, AI introduces new testing and regulatory challenges that QA teams must anticipate.

AI-enabled SaMD can learn and adapt over time, which means traditional static testing approaches are not enough. Verification must now include model validation, bias and fairness assessments, robustness to edge cases, and performance monitoring across different patient populations and devices. Continuous learning systems require well-defined change management processes to ensure updates do not compromise safety or effectiveness.
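
To make "performance monitoring across different patient populations" concrete, here is a minimal sketch that computes a diagnostic model's sensitivity per patient subgroup and checks every subgroup against a minimum acceptance threshold. The threshold, subgroup labels, and data layout are hypothetical:

```python
# Hypothetical sketch: per-subgroup sensitivity check for a binary diagnostic model.
from collections import defaultdict

MIN_SENSITIVITY = 0.90  # illustrative acceptance criterion

def sensitivity_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label), 1 = condition present."""
    positives = defaultdict(int)
    detected = defaultdict(int)
    for subgroup, truth, prediction in records:
        if truth == 1:
            positives[subgroup] += 1
            if prediction == 1:
                detected[subgroup] += 1
    return {group: detected[group] / positives[group] for group in positives}

def passes_subgroup_check(records) -> bool:
    return all(s >= MIN_SENSITIVITY for s in sensitivity_by_subgroup(records).values())
```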

From a regulatory perspective, both the FDA and EU regulators are evolving guidance to address adaptive AI, including requirements for transparency, risk-based validation, and post-market monitoring. Teams must document data sources, training procedures, and performance metrics to demonstrate compliance while maintaining clinical trust.

In the context of QA, this means integrating AI-specific test frameworks, simulation environments, and real-world evidence collection into the SaMD lifecycle. Successful adoption of AI-enabled SaMD requires not just coding and algorithm expertise, but a rigorous testing culture that aligns with patient safety and regulatory expectations.

Conclusion

Software now sits at the point of care. For product managers, decision-makers, and QA leaders, SaMD success means treating quality, safety, security, and usability as interlocking requirements—designed in, tested continuously, and evidenced rigorously. Teams that operationalize standards (ISO 14971, IEC 62304, IEC 62366-1), align early with EU/FDA expectations, and invest in robust verification, validation, and monitoring will ship faster with less regulatory friction—and, most importantly, deliver safer care.

Ready to ship medical-grade software with confidence? Learn more about our software testing and ISO advisory services and how we can help you de-risk your roadmap and accelerate market access.

QA engineer having a video call with 5-star rating graphic displayed above

Deliver a product made to impress

Build a product that stands out by implementing best software QA practices.

Get started today