Launching a connected hardware product without a quality assurance function is one of the highest-risk decisions an IoT company can make, and it is far more common than the industry admits.
For pure software businesses, the cost of deferred QA is measured in bug backlogs and slower release cycles. For companies that manufacture and operate their own IoT hardware in real-world environments, the cost is measured differently: in devices that fail in the field, in user experiences that break in physical space with no remote patch available, and in regression risk that compounds silently across every release cycle until a critical failure makes it visible.
This article addresses that problem directly: what does it actually take to build a QA function for an IoT product from scratch, covering both custom hardware and the consumer-facing mobile application, when you have no testing team, no established process, and no regression baseline to start from?
It draws on TestDevLab's engagement with Tuul, an Estonian electric micro-mobility company that manufactures and operates its own shared scooter fleet across Tallinn and Riga. Tuul builds its IoT devices in-house, giving it full control over both hardware and software, and full responsibility for validating both. When TestDevLab came on board, Tuul had no QA function of any kind. Read the full Tuul case study for complete details on the engagement and outcomes.
TL;DR
30-second summary
What does it actually take to build a QA process for an IoT product when you're starting from zero?
- The cost of deferring QA compounds faster in IoT than in pure software: hardware failures manifest in physical space, with no remote patch available.
- IoT QA requires covering two surfaces simultaneously: the device hardware and the consumer-facing application, because users experience both as one service.
- Hardware-specific testing domains, like battery, network, GPS, and interruption/recovery, have no direct equivalent in standard software QA and cannot be skipped.
- Building a regression test suite from scratch is the highest-leverage activity in a zero-baseline engagement. Without it, every release carries unquantified risk.
- Structured reporting with root cause analysis and replication data is what converts testing activity into development decisions.
Bottom line: An IoT QA function built to cover both hardware and application surfaces, with regression coverage, hardware-specific test domains, and decision-driving reporting, is a risk management investment whose return compounds with every release.
Why do IoT companies so often defer QA—and what does that cost them?
The decision to defer quality assurance is rarely made explicitly. More often, it accumulates: a founding team moves fast to ship, testing responsibilities fall informally to developers, and the absence of dedicated QA goes unexamined until a failure makes it visible. For software-only products, this pattern carries real costs but manageable ones. For IoT companies, the calculus is fundamentally different.
When Tuul's development team was building out its scooter fleet, the stakes of a quality failure were immediate and physical. A malfunctioning scooter means a rider stranded in a city. A GPS inaccuracy means fleet operations are compromised in real time. A mobile app failure at the unlock screen means a user abandoning the service at the moment of highest intent. None of these failures can be patched remotely overnight; they manifest in physical space, for real users, with no buffer between the defect and its consequence.
The longer a QA function is deferred, the larger the technical debt that accumulates in released products. That debt is not static; it compounds. Every new release built on an unvalidated foundation carries unquantified regression risk, because without a test suite there is no systematic way to verify that new development has not broken existing functionality. A company in this position is, in effect, operating blind.
What makes IoT QA harder than standard software testing?
Standard software testing deals with a single surface: the application and its behavior under various conditions. IoT QA deals with two surfaces simultaneously: the device hardware and the software application. Both must be validated, because users experience them as a single integrated service. A rider using a Tuul scooter does not distinguish between an app failure and a device failure; they experience a broken service.
This dual-surface requirement expands the testing scope substantially. Comprehensive IoT QA must cover not only application behavior across operating systems and device types, but also hardware-specific failure modes that have no direct analogue in software testing:
- Battery performance and endurance. IoT devices operating in field conditions face duty cycles that bench testing cannot fully replicate. Battery validation must confirm that devices remain operational across realistic usage patterns throughout a full day of service.
- Network connectivity and resilience. Urban environments produce highly variable network conditions — areas of strong signal, dead zones, handoff events between cell towers. IoT devices must maintain reliable connectivity and handle degraded or interrupted connectivity gracefully, without data loss or state corruption.
- GPS and location accuracy. For a scooter fleet service, location data is not a nice-to-have — it is a core operational dependency. GPS accuracy testing must account for urban canyon effects, signal multipath interference, and the precision requirements of fleet management systems.
- Interruption and recovery behavior. Devices in the field encounter power events, connectivity drops, and environmental stressors that controlled environments do not produce. Testing must validate how devices behave when interrupted and whether they recover cleanly to a valid operational state (a sketch of what such a check can look like follows this list).
- Regression coverage across both surfaces. Without a regression test suite, every release carries unquantified risk to existing functionality across both the hardware integration layer and the application. Building this coverage from scratch is labor-intensive, but its absence is a structural vulnerability that grows more dangerous with every release.
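To make the interruption-and-recovery point concrete, here is a minimal sketch of what one such check can look like as an automated test. It assumes a hypothetical `DeviceClient` test-bench wrapper with `connect`, `read_state`, and `power_cycle` methods, and a 60-second recovery threshold; these names and values are illustrative assumptions, not Tuul's or TestDevLab's actual tooling.

```python
# Illustrative pytest sketch: interruption-and-recovery check for an IoT device.
# DeviceClient and its methods are hypothetical stand-ins for whatever interface
# the device under test actually exposes (serial, BLE, or a fleet API).
import time
import pytest

from device_lab import DeviceClient  # hypothetical test-bench wrapper

RECOVERY_TIMEOUT_S = 60  # assumed acceptance threshold, not a published spec


@pytest.fixture
def device():
    client = DeviceClient(device_id="bench-unit-01")
    client.connect()
    yield client
    client.disconnect()


def test_device_recovers_after_power_interruption(device):
    # Capture a known-good baseline state before the interruption.
    baseline = device.read_state()
    assert baseline.is_operational

    # Simulate an abrupt power loss and restore (e.g. via a controllable relay).
    device.power_cycle(hold_off_seconds=5)

    # The device should return to a valid operational state within the timeout,
    # without corrupting persisted state such as lock status.
    deadline = time.monotonic() + RECOVERY_TIMEOUT_S
    while time.monotonic() < deadline:
        state = device.read_state()
        if state.is_operational:
            break
        time.sleep(2)
    else:
        pytest.fail("Device did not recover within the timeout")

    assert state.lock_status == baseline.lock_status
    assert state.firmware_version == baseline.firmware_version
```

The point of the sketch is the structure, not the specific calls: capture a baseline, inject the interruption deliberately, bound the recovery window, and assert on persisted state rather than just "the device came back".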
The compounding challenge is that all of this testing must be designed and executed by engineers who understand both hardware validation and software QA—a combination of expertise that is not standard in most product engineering teams.
What does a rigorous IoT QA methodology look like in practice?
Building a QA function for an IoT product from scratch requires a specific sequencing: establish the function itself before optimizing any part of it. The instinct to automate or optimize before the foundational process is defined leads to brittle infrastructure and gaps in coverage. The right starting point is defining testing objectives and procedures that give the development team a structured, repeatable process, for both the mobile application and the custom hardware, before any tooling decisions are made.
In practice, a rigorous IoT QA methodology covers the following domains in parallel, because both surfaces feed the same user experience and cannot be validated in isolation.
- Mobile application testing validates that the app behaves as documented across the full range of iOS and Android devices used by real riders. This is not a generic compatibility matrix exercise — it requires testing against the app's documented specifications to identify gaps between specified and actual behavior before those gaps become user-facing failures. For a daily-use service, the mobile application is a critical operational dependency, not a secondary surface.
- IoT hardware testing spans battery, network, location, interruption, and validation testing — each addressing a specific failure mode that, in a deployed fleet, manifests not as a test result but as a device that does not work for a rider who needs it. The objective is to confirm that devices meet quality standards across the real-world conditions they will encounter in operation, not only the controlled conditions of a test bench.
- Regression suite construction is among the highest-leverage activities in a zero-baseline QA engagement. Building a regression test suite from scratch gives the development team a durable quality baseline: a systematic mechanism for verifying that new development has not broken existing functionality, and the foundation for increasingly confident and faster future releases.
- Structured quality reporting closes the loop between testing activity and development decision-making. Root cause analysis and issue replication data give product and engineering teams the information they need to prioritize improvements, not just a list of failures. A reporting cadence that aligns with release cycles ensures that quality status is visible at the moments when decisions are actually made.
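To make that last point concrete, here is a minimal sketch of a single, decision-ready issue record. The field names, severity scale, and the example entry are invented purely for illustration; they do not represent TestDevLab's actual reporting format or any real finding from the Tuul engagement.

```python
# Illustrative sketch of a structured issue record for a quality report.
# Field names, severity scale, and the example entry are assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    CRITICAL = 1   # blocks core user flows (e.g. unlock fails)
    MAJOR = 2      # degrades the service but has a workaround
    MINOR = 3      # cosmetic or low-impact


@dataclass
class IssueReport:
    issue_id: str
    surface: str                   # "mobile-app" or "iot-device"
    summary: str
    severity: Severity
    replication_steps: list[str]   # exact steps that reproduce the issue
    replication_rate: float        # fraction of attempts that reproduce it
    root_cause: str                # analysis, not just the observed symptom
    affected_releases: list[str] = field(default_factory=list)
    recommended_action: str = ""


# Invented example entry, showing the level of detail a team can act on directly.
example = IssueReport(
    issue_id="IOT-142",
    surface="iot-device",
    summary="Device reports stale GPS fix after a network handoff",
    severity=Severity.MAJOR,
    replication_steps=[
        "Start a ride in an area with strong signal",
        "Pass through a known dead zone for at least two minutes",
        "Check the reported position once connectivity is restored",
    ],
    replication_rate=0.7,
    root_cause="Location cache is not invalidated on reconnect",
    affected_releases=["fw-2.3.0", "fw-2.3.1"],
    recommended_action="Invalidate the cached fix when the modem re-registers",
)
```

A record like this is what turns a test cycle into a prioritization conversation: severity, replication data, and root cause are all present, so the development team does not have to rediscover the problem before deciding what to do about it.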
How did this approach play out in a real deployment?
Tuul's engagement with TestDevLab illustrates each of these elements in operation. When the engagement began, Tuul had no testing team, no established QA process, and no regression baseline. The work of defining testing objectives, establishing procedures, and integrating QA into the software development lifecycle was the engagement's most consequential output, not because of what any individual test cycle found, but because it changed how quality was managed across the organization at a structural level.
Mobile application testing across iOS and Android identified the gaps between documented specifications and actual app behavior, giving Tuul's development team a clear picture of what needed to be resolved before those gaps became failures for riders on the street.
IoT device testing covered battery endurance, network resilience, GPS accuracy, and interruption recovery, each validating a specific failure mode that would otherwise have surfaced in the deployed fleet rather than in a controlled testing environment. Because Tuul manufactures its own hardware, every quality failure was its own: there was no external supplier to absorb responsibility for defects.
The regression test suite, built from scratch, resolved the structural vulnerability that had accumulated in every prior release. With no prior regression coverage, each new release had carried unquantified risk to existing functionality. The suite gave Tuul a reliable mechanism for protecting software quality across all future development cycles.
TestDevLab continues to support Tuul on an ongoing basis, designing new tests and recommending improvements to both IoT devices and mobile applications as the product evolves. The engagement model is a remote team working directly with Tuul's development organization — a structure that enables continuous quality coverage without requiring Tuul to build and maintain an internal QA team.
Read the full Tuul case study for complete details on the testing scope, methodology, and outcomes.
FAQ
Most common questions
What types of testing does IoT QA require that standard software testing does not?
IoT QA must cover hardware-specific failure modes that have no direct software equivalent: battery performance and endurance testing, network connectivity and resilience testing, GPS and location accuracy validation, and interruption and recovery behavior testing. These must be conducted alongside standard application testing because users experience both surfaces as a single integrated service.
How do you build a regression test suite for an IoT product from scratch?
Start by documenting the existing functionality of both the mobile application and the hardware integration layer, then prioritize test cases by the severity of regression risk each covers. The goal in an early-stage regression suite is not exhaustive automation but durable coverage of the highest-risk paths, a baseline that can be extended systematically with each release cycle.
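As a rough sketch of that prioritization step, the snippet below scores candidate test cases by severity and likelihood of regression and orders them for coverage. The scoring model and the example cases are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative sketch: ranking candidate regression test cases by risk.
# The severity-times-likelihood scoring model and the case catalogue are
# assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class RegressionCandidate:
    name: str
    surface: str        # "mobile-app" or "iot-device"
    severity: int       # 1 (cosmetic) .. 5 (service-breaking)
    likelihood: int     # 1 (rarely touched code) .. 5 (changes every release)

    @property
    def risk_score(self) -> int:
        return self.severity * self.likelihood


candidates = [
    RegressionCandidate("unlock flow end-to-end", "mobile-app", 5, 4),
    RegressionCandidate("ride end and payment capture", "mobile-app", 5, 3),
    RegressionCandidate("device reconnect after dead zone", "iot-device", 4, 4),
    RegressionCandidate("battery telemetry reporting", "iot-device", 3, 2),
]

# Cover the highest-risk paths first; extend the baseline with each release.
for case in sorted(candidates, key=lambda c: c.risk_score, reverse=True):
    print(f"{case.risk_score:>2}  {case.surface:<11} {case.name}")
```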
Why is GPS accuracy testing critical for shared mobility and fleet IoT applications?
Location data is a core operational dependency in fleet services: it underpins the user experience at unlock and return, and it drives fleet management decisions in real time. GPS inaccuracies in urban environments, caused by signal multipath interference and urban canyon effects, can compromise both the rider experience and operational efficiency, making location validation a non-negotiable testing domain.
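For illustration, a basic location-accuracy check can compare a device-reported fix against a surveyed reference point using the haversine great-circle distance. The 10-metre tolerance and the example coordinates below are assumptions for the sketch, not an industry or fleet-specific requirement.

```python
# Illustrative sketch: validating a reported GPS fix against a known reference
# point using the haversine distance. The tolerance is an assumed threshold.
import math

EARTH_RADIUS_M = 6_371_000


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def check_fix_accuracy(reported: tuple[float, float],
                       reference: tuple[float, float],
                       tolerance_m: float = 10.0) -> bool:
    """Return True if the reported fix is within tolerance of the reference."""
    return haversine_m(*reported, *reference) <= tolerance_m


# Example with illustrative coordinates for a surveyed test point.
assert check_fix_accuracy(reported=(59.43710, 24.75360),
                          reference=(59.43712, 24.75355))
```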
How should IoT QA reporting be structured to drive development decisions?
Reporting should provide root cause analysis and issue replication data, not just failure counts. The reporting cadence should align with the development team's release cycle so that quality status is visible at the moments when prioritization decisions are actually made. A report that cannot be acted on directly is a report that will not be read.
Can a remote QA team effectively test custom IoT hardware?
Yes, provided the engagement model is structured for close collaboration with the product engineering team and hardware access is managed appropriately. TestDevLab's engagement with Tuul was conducted entirely as a remote team model, covering both mobile application and custom IoT device testing, with the testing infrastructure built from scratch and ongoing support continuing as the product evolves.
Is your IoT product going into production without a QA function behind it?
TestDevLab works with hardware and IoT companies to establish quality assurance from the ground up—mobile application testing, custom device validation, regression suite construction, and structured reporting included.