There's a point in every fast-growing company's life where the QA team goes from being a quality safeguard to being the thing that slows everyone down.
As Pekka Pönkänen, QA Competence Lead at Wolt, describes on a recent episode of the Tech Effect podcast, he watched it happen in real time. When he joined Wolt in September 2020, the company had around 1,400 people. Today it has 15,000. The engineering team alone has grown to at least 750 engineers, with the broader product organization around 1,000 people.
As the company scaled through COVID-driven demand, the pattern was predictable: more engineers meant more features, more features meant more code, more code meant more things to test. And the QA team, no matter how skilled, couldn't test everything that was shipping. They were becoming the bottleneck, and the math wasn't going to fix itself. Hiring QA testers proportionally to match engineering growth wasn't sustainable.
"We realized that QA cannot test everything. There were too many tickets, PRs, hot fixes coming up. We were the bottleneck. So we destroyed that and changed the model." —Pekka Pönkänen, QA Competence Lead at Wolt
The solution wasn't more testers. It was a fundamentally different approach to who owns quality.
TL;DR
30-second summary
How did Wolt compress its mobile release cycle from five weeks to one week while scaling its engineering team to 750 people — and what can other technical teams learn from it?
According to Pekka Pönkänen, QA Competence Lead at Wolt, speaking on the Tech Effect podcast:
- Quality ownership must be distributed, not centralised. At scale, a centralised QA team testing everything becomes the bottleneck. Wolt shifted from a quality assurance model, where QA is the final gate, to a quality assistance model, where every engineer owns the quality of their work and QA professionals act as coaches and enablers.
- Release cycle compression happens in stages, not overnight. Wolt moved from an unstructured three-to-six-week cycle to three weeks, then two, then one, each step backed by infrastructure improvements, process refinements, and team alignment. Skipping the intermediate stages creates more problems than it solves.
- Communication is the single most important enabler of fast releases. With 750 engineers contributing to shared codebases, knowing what is changing, what is risky, and what is ready to ship requires constant, structured communication. Slack discipline, dedicated release channels, and clear escalation paths are not optional at this scale.
- QA should be present before a single line of code is written. The cheapest time to catch a problem is during product design and specification. Getting QA perspective involved at that stage prevents entire categories of bugs that are exponentially more expensive to find and fix in production.
- AI tools are changing the QA profession, and teams that experiment now will set the standard. AI agents running user flows and generating edge case scenarios will handle the repetitive, pattern-based work, freeing human testers for exploratory testing, real-world scenario design, and critical thinking. But AI-powered features also need to be tested, and QA engineers who can evaluate both the happy path and the failure modes of AI integrations will be in high demand.
Bottom line: According to Pekka Pönkänen on Tech Effect, Wolt's ability to ship weekly mobile releases across a 750-person engineering organisation came down to three things: distributing quality ownership across every team, investing in CI/CD infrastructure and platform engineering, and treating communication as a release-critical discipline. The QA team didn't shrink. It evolved from a bottleneck into a multiplier.
The mindset shift: From assurance to assistance
The traditional QA model, where testers review everything before it ships, works at a small scale. At Wolt's pace, it was unsustainable. So they moved from what Pönkänen calls an "assurance approach" to an "assistance approach."
In practical terms, this means:
Every engineer owns quality for their work.
If you're part of a feature team, you're responsible for the quality of what that team ships. QA isn't a separate gate you push your code through; it's a shared responsibility across the team.
QA engineers act as coaches, not gatekeepers.
Instead of testing every ticket, QA professionals help teams develop better testing practices, review testing strategies, assist with test automation, and provide guidance on critical projects. They bring their domain expertise, substantial after years of working with the product, to help teams ship well rather than checking every line item.
Anyone can fix what they see.
The cultural element matters as much as the process. At Wolt, if you see something broken and you have the capability to fix it, you open a pull request. It's not "that's another team's problem." It's "we all care about the overall product."
This shift required trust. Engineers had to take genuine ownership of quality, not just nominal ownership. QA engineers had to let go of being the final checkpoint and instead focus on enabling others. And leadership had to accept that a distributed quality model means some things will be caught differently than before.
The result: the QA team stayed lean even as the company grew 10x, because quality was no longer concentrated in a single function. It was distributed across the entire engineering organization.
How Wolt compressed release cycles without cutting corners
Wolt's mobile release cycle didn't shrink overnight. It happened in stages, each one requiring process changes, tooling improvements, and, above all, better communication.
- The starting point (pre-2021): Releases happened when someone decided it was time. There was no structured cadence. The cycle could take three to six weeks depending on what was in the pipeline.
- First improvement (2021): After shifting to the assistance model and implementing a release plan for the first time, the team brought the cycle down to three weeks. A big win at the time.
- Second stage: As scaling continued and more teams needed to ship faster, they compressed further to two weeks. This required improved CI/CD infrastructure, better automated testing, and more disciplined release management.
- Current state: Weekly releases for native mobile apps. This is especially notable because mobile releases carry an extra constraint that web doesn't—app store approval. Every release needs sign-off from Apple and Google before it reaches users, which adds unpredictable time to the process.
The enablers Pönkänen highlights:
Platform engineering
Investing in the build and CI infrastructure made faster releases technically possible. Without reliable automated builds, consistent test environments, and fast feedback loops, weekly releases would have been reckless.
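One concrete piece of that infrastructure is an automated release gate: a check that promotes a build only when objective health signals are green. The sketch below is illustrative, not Wolt's actual tooling; the fields on `BuildReport` and the thresholds are assumptions chosen to show the idea.

```python
from dataclasses import dataclass


@dataclass
class BuildReport:
    """Snapshot of a release candidate's health (hypothetical fields)."""
    build_passed: bool      # did the CI build succeed?
    test_pass_rate: float   # fraction of automated tests passing, 0.0-1.0
    open_blockers: int      # unresolved release-blocking tickets


def is_releasable(report: BuildReport, min_pass_rate: float = 0.99) -> bool:
    """A candidate ships only if the build is green, the automated suite
    is (near-)fully passing, and no release blockers remain open."""
    return (
        report.build_passed
        and report.test_pass_rate >= min_pass_rate
        and report.open_blockers == 0
    )


# A weekly cadence depends on this answer being computed, not debated:
ready = is_releasable(BuildReport(build_passed=True, test_pass_rate=1.0, open_blockers=0))
```

Encoding the go/no-go decision in code is what turns "fast feedback loops" from a slogan into a pipeline step: the same check runs on every candidate, and a failed gate points at exactly which signal regressed.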
Communication
Pönkänen returns to this point repeatedly, calling it his "golden rule." With 750 engineers contributing to shared codebases, knowing what's changing, what's risky, and what's ready to ship requires constant, clear communication. "If the communication fails, it's a mess," he says. As teams scale beyond the point where everyone is in the same room, Slack discipline, structured release channels, and clear escalation paths become the difference between smooth releases and chaos.
A dedicated release team
Forming a small cross-functional group of QAs, leads, and engineers focused specifically on the release process gave the effort focus and accountability. They identified pain points, proposed improvements, and communicated changes to the broader organization.
Why QA as an afterthought costs more than you think
Pönkänen acknowledges reality: if you're an early-stage startup hitting the market for the first time, you might need to move fast and accept some rough edges. "You can wing it if you're early stage," he says. "Do some cowboy action. Your user base is still small."
But the moment you start scaling—when users depend on your product, when competitors are ready to absorb your customers, when your data obligations get serious—treating QA as an afterthought becomes expensive in multiple ways:
- Lost customers who don't come back. Modern users have zero tolerance for buggy apps. "Nowadays the user is like, 'it didn't work, I'm never going to come back,'" Pönkänen says. And they won't just leave quietly; they'll tell others. A good app gets recommended to friends and family. A bad one gets warned against.
- Delayed launches. When testing is saved for the end and significant issues are found, the entire release gets pushed back. The marketing is already booked, the timeline was committed to stakeholders, and now QA is the team delivering bad news. This creates exactly the kind of pressure and stress that leads to cutting corners, which leads to more problems.
- Exponentially higher fix costs. A bug caught during a design review costs almost nothing to fix. The same bug caught in production—after it's been released, after users have encountered it, after customer support has fielded complaints—costs orders of magnitude more. Getting QA involved from the first line of code (or even before, during product design) compresses the cost of quality dramatically.
The practical rule: QA should be present when there are zero lines of code, that is, during product design and specification. Having someone who thinks about edge cases, failure modes, and user behavior review the plan before development starts prevents entire categories of problems that are expensive to fix later.
What customer feedback actually looks like at scale
For a consumer app like Wolt, feedback comes from everywhere, and the companies that use it well treat every source as a signal worth processing.
Pönkänen lists the sources that matter at Wolt:
- Customer support. Wolt's support team, which Pönkänen describes as "the best in the world," generates detailed reports that flow directly into technical channels. When users report issues through chat, those reports become actionable bug tickets.
- Friends and family. "Someone's grandma can't use it? There's something really wrong." Informal feedback from people in your life who use the product is surprisingly valuable because they'll tell you things that strangers won't bother reporting.
- App store reviews. Both positive and negative reviews contain usable signals about what's working and what isn't.
- Community channels. Wolt uses a shared Slack channel where anyone in the company can report issues they've found. This creates a low-friction path for bugs and UX problems to surface quickly from people who use the product daily.
- Reddit and forums. Pönkänen mentions personally investigating bug reports found on Reddit. Public forums are an unfiltered source of user experience data.
The pattern: don't limit your feedback pipeline to formal channels. Users express frustration (and delight) in places you don't control. The teams that monitor broadly and respond quickly build the reputation that keeps customers loyal.
What QA engineers need to prepare for next
The QA profession is changing, and Pönkänen's advice is direct.
The technical bar is rising.
Cloud infrastructure, better emulators, faster tooling: these make the job faster but also raise expectations. QA engineers are expected to be more technical than they were a decade ago. The shift from "quality assurance" to "quality engineering" isn't just terminology; it reflects a real expansion in the skills required.
AI tools are here, go try them.
Pönkänen expects to see AI agents "roaming around applications" as testing companions, discovering edge cases, generating test cases, running user flows. This won't replace human QA, but it will change the balance. The repetitive, pattern-based testing that AI handles well frees up human testers for the work that requires creativity, empathy, and domain knowledge, like exploratory testing, real-world scenario design, and critical thinking about what could go wrong.
His advice: if you haven't tried the new AI testing tools yet, start now. Ask your company for licenses. Experiment. Understand what these tools can and can't do. Because if you don't, you'll fall behind teams that are already using them.
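To make "pattern-based" concrete, much of what these tools automate is classic boundary-value analysis: systematically probing the edges of an accepted input range. A minimal hand-rolled sketch, where `validate_quantity` and its 1-to-99 rule are hypothetical stand-ins for a real input validator:

```python
def boundary_cases(lo: int, hi: int) -> list[int]:
    """Boundary-value analysis: for an accepted range [lo, hi], probe
    just outside, on, and just inside each edge of the range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]


def validate_quantity(qty: int) -> bool:
    # Hypothetical business rule: an order may contain 1 to 99 items.
    return 1 <= qty <= 99


# Run the generated edge cases against the validator and record results.
results = {qty: validate_quantity(qty) for qty in boundary_cases(1, 99)}
```

Generating these probes mechanically, across every input field, is exactly the kind of repetitive work AI testing agents are well suited to; the human judgment is in deciding which ranges and rules matter.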
But apply critical thinking.
AI tools integrated into your product also need testing. If your product now depends on an AI API, you need to understand what it does, how it can fail, and what safeguards are in place. "Maybe horrible things can happen if you don't test your new AI tools," Pönkänen warns. QA engineers who can evaluate AI-powered features, testing both the happy path and the failure modes, will be in high demand.
AI features fail in ways traditional testing doesn't catch.
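One way to exercise those failure modes deterministically is to wrap the AI call behind a safeguard and drive it with a fake client in tests. The sketch below is an illustration under assumed names (`FakeAIClient`, `suggest_reply`, `safe_suggestion` are all hypothetical), not a real provider's API:

```python
class FakeAIClient:
    """Stand-in for a real AI API client, so failure modes can be
    triggered on demand in tests instead of waited for in production."""

    def __init__(self, response):
        self._response = response  # a canned reply, or an Exception to raise

    def suggest_reply(self, prompt: str):
        if isinstance(self._response, Exception):
            raise self._response
        return self._response


def safe_suggestion(client, prompt: str, fallback: str = "") -> str:
    """Wrap the AI call: exceptions and malformed responses degrade to a
    known-safe fallback instead of surfacing raw failures to the user."""
    try:
        result = client.suggest_reply(prompt)
    except Exception:
        return fallback
    if not isinstance(result, str) or not result.strip():
        return fallback
    return result


# Happy path plus two failure modes a traditional happy-path suite skips:
ok = safe_suggestion(FakeAIClient("Try restarting the app."), "help")
timed_out = safe_suggestion(FakeAIClient(TimeoutError()), "help", fallback="n/a")
malformed = safe_suggestion(FakeAIClient(None), "help", fallback="n/a")
```

The point of the pattern is that the failure paths (timeouts, empty or non-string output) get first-class test coverage, which is exactly the evaluation skill the paragraph above describes.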
Key takeaways for technical teams
Quality is everyone's job, not just QA's. At scale, a centralized QA team testing everything is a bottleneck. Shift to a model where every engineer owns the quality of their work, and QA professionals act as coaches and enablers rather than gatekeepers.
Communication is the golden rule. More than any tool, framework, or process change, clear and consistent communication is what makes fast release cycles possible. As your team grows, invest in structured communication (release channels, escalation paths, shared reporting) before the lack of it becomes a crisis.
Start QA early, at zero lines of code. The cheapest time to catch a problem is during design. Getting QA perspective involved in product specification and architecture review prevents entire categories of bugs that are expensive to find and fix later.
Speed up your release cycle incrementally. Wolt didn't jump from five weeks to one week overnight. They went from unstructured to three weeks, then two, then one, each step backed by infrastructure improvements, process refinements, and team alignment. Rushing the compression without the supporting changes creates more problems than it solves.
Invest in platform engineering and test automation. Weekly mobile releases are only possible when your CI/CD pipeline is reliable, your automated tests are trustworthy, and your build infrastructure can handle the pace. These investments pay for themselves in release speed and team sanity.
Go try the new tools. Whether it's AI testing agents, improved emulators, or cloud-based testing environments, the tooling available to QA engineers today is dramatically better than it was even a few years ago. The technical bar is rising, and the teams that experiment early will set the standard.
→ Listen to the full conversation with Pekka Pönkänen on the Tech Effect podcast
FAQ
Most common questions
How did Wolt reduce its mobile release cycle from five weeks to one week?
The compression happened in stages rather than all at once. Wolt first established a structured release cadence and shifted quality ownership from a centralised QA team to individual feature teams, bringing the cycle down to three weeks. Further investment in CI/CD infrastructure, automated testing, and release management brought it to two weeks, then eventually to weekly releases. Each stage required both technical improvements and organisational alignment, particularly around communication practices across a rapidly growing engineering team.
What is the difference between a quality assurance model and a quality assistance model?
In a traditional quality assurance model, a dedicated QA team reviews and approves work before it ships, acting as a final gate in the release process. At scale, this becomes a bottleneck. A quality assistance model distributes ownership instead: every engineer is responsible for the quality of their own work, and QA professionals act as coaches, helping teams develop better testing practices, reviewing testing strategies, and providing guidance on high-risk projects. The QA function shifts from checkpoint to enabler.
Why is communication described as the golden rule of fast release cycles?
With hundreds of engineers contributing to shared codebases simultaneously, release failures are rarely caused by individual technical mistakes. They are caused by coordination breakdowns. Not knowing what a neighbouring team has changed, what is risky in the current build, or who to escalate to when something goes wrong creates the conditions for chaotic releases. Structured communication through dedicated release channels, clear escalation paths, and consistent status updates is what makes it possible to ship quickly without surprises.
When should QA be involved in the development process?
According to Pekka Pönkänen, QA should be present when there are zero lines of code, meaning during product design and specification, before development begins. A bug identified during a design review costs almost nothing to fix. The same bug found in production, after users have encountered it and customer support has fielded complaints, costs orders of magnitude more. QA involvement at the specification stage prevents entire categories of problems rather than catching them after they have been built in.
How is AI changing the role of QA engineers?
AI testing tools are beginning to handle the repetitive, pattern-based work that has traditionally occupied a significant portion of QA time, like running user flows, generating edge case scenarios, and identifying regression failures. This frees human testers to focus on work that requires creativity and domain knowledge, such as exploratory testing and real-world scenario design. However, AI-powered features in products also introduce new testing requirements: QA engineers who can evaluate how AI integrations fail, not just how they succeed, will be increasingly valuable as more products rely on AI APIs and models.
More engineers, more features, more risk. Is your QA model scaling with you?
As release cycles get faster and AI features get more complex, the failure modes that matter most are the ones standard testing doesn't catch. We help scaling engineering teams build QA processes that keep pace, without becoming the bottleneck.