The cooperation problem that most teams never directly address
Ask most engineering leaders what they are trying to build and they will describe something like: a high-performance team with strong quality standards, clear ownership, and a culture of continuous improvement. Ask them how they incentivize their teams and the answer is usually some version of: based on what each person delivers.
The gap between those two answers is where most team dysfunction originates.
In a recent episode of the Tech Effect podcast, we sat down with Nikolaj Tolkačiov, engineering manager at Nord Security and a leading voice in the Lithuanian software testing community through his work with the LSTQB. With more than a decade of experience across QA engineering, large testing organization leadership, and a CTO role that didn't work out, Tolkačiov has built a framework for thinking about team dynamics that most engineering leaders never articulate directly. What he shared on the episode challenges one of the most common assumptions in software development: that a team of high performers will naturally become a high-performing team.
His presentation at Tescon 2025 distilled a conclusion that sounds provocative until the logic becomes clear: people are fundamentally self-interested, cooperation does not happen by default, and leaders who assume otherwise will be continually confused by why their teams do not share knowledge, help each other when stuck, or invest in practices that benefit the team but cost the individual something.
Understanding this is not pessimistic. It is the starting point for building something that actually works.
TL;DR
30-second summary
What does it actually take to build a team where people make each other better?
According to Nikolaj Tolkačiov, Engineering Manager at Nord Security and Executive Board Member at the LSTQB, speaking on the Tech Effect podcast:
- People default to self-interest, not cooperation, and pretending otherwise is the most common mistake leaders make when building teams. Incentive structures that reward individual output over collective knowledge-sharing produce teams that hoard information and cap their own potential at the individual level.
- Trust is the only foundation that makes everything else in a team work. Without it, data-informed decisions, continuous learning, and shift-left quality practices are all surface-level—they will not be accepted, and the effort put into them will be wasted.
- The "fix everything" instinct that makes a great QA engineer can actively harm an engineering manager. Not every bug should be fixed. Every decision about whether to release with a known issue is a resource allocation decision, and treating quality as binary rather than contextual is one of the most common ways technical leaders destroy team productivity.
- Juniors who over-rely on AI before building foundational skills will be unable to critically evaluate AI output—which means they will ship whatever the AI produces, including its hallucinations, without knowing the difference.
- Zero-knowledge architecture creates a testing environment where the most familiar QA methods simply do not work. Testing at NordPass requires cryptography knowledge and custom tooling that most QA engineers have never encountered—and the constraint forces a level of security expertise that most teams never develop.
Bottom line: According to Nikolaj Tolkačiov on Tech Effect, engineering teams that sustain quality over time are built on trust and structured cooperation rather than competitive individual performance. Leaders who understand this build environments where knowledge multiplies across the team rather than accumulating with a few individuals who have every reason to keep it to themselves.
Why competitive incentives produce knowledge hoarding
The prisoner's dilemma in game theory illustrates the core problem precisely. When two players make a single decision in isolation, defection—acting in self-interest—produces a better individual outcome than cooperation, regardless of what the other player does. The rational choice is to look out for yourself.
The situation changes completely when the game repeats. Across multiple rounds, cooperation consistently outperforms defection because the accumulated benefits of mutual assistance exceed what any individual can achieve alone. The math is not subtle: iterative cooperation does not just produce marginally better outcomes, it unlocks a completely different ceiling.
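The single-round versus iterated contrast above can be sketched in a few lines. This is an illustrative simulation using the standard prisoner's dilemma payoff values (temptation 5, reward 3, punishment 1, sucker's payoff 0); all names and numbers are assumptions for demonstration, not anything from the episode.

```python
# Minimal prisoner's dilemma simulation: single round vs. iterated play.
R, T, P, S = 3, 5, 1, 0  # reward, temptation, punishment, sucker's payoff

def payoff(a, b):
    """Return (score_a, score_b) for one round of moves 'C' or 'D'."""
    if a == "C" and b == "C":
        return R, R
    if a == "D" and b == "D":
        return P, P
    return (T, S) if a == "D" else (S, T)

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees only the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = payoff(move_a, move_b)
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda opponent: "D"
tit_for_tat = lambda opponent: "C" if not opponent else opponent[-1]

# Single round: defecting against a cooperator pays 5 vs. 3 for mutual
# cooperation, so self-interest wins. Over 10 rounds, two cooperating
# tit-for-tat players earn 30 each, while a defector facing tit-for-tat
# earns only 14. The iterated game has a different ceiling.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(always_defect, tit_for_tat))  # (14, 9)
```

The defector wins the first round and then pays for it every round after, which is exactly the dynamic a delivery-only incentive system recreates inside a team.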
Tolkačiov applies this directly to engineering teams:
"If I encourage the teams to be focusing on their deliveries, I will be measuring them like how much value they created, like outcomes of their effort, of their hours they put into the task—versus if I will be encouraging cooperation. Everyone knows that you will get more kudos, you will get more attention if I will be just focusing on my delivery and delivering the maximum I can. But that creates problems in teams because no one is helping each other."
The result of purely delivery-focused incentives is a team of individuals working in parallel rather than a team. Each person optimizes their own output. No one shares what they know because sharing reduces their advantage. When someone is stuck, they do not ask for help because asking is an admission of weakness in a competitive environment. Velocity looks fine in the short term. The compounding cost shows up later—in knowledge silos, in burnout among people who cannot get help when they need it, in the imposter syndrome that quietly builds when people feel they have to pretend to know things they do not.
The alternative is not idealistic. It is structural. Leaders who want cooperative teams need to make cooperation the thing that gets rewarded, not just in theory, but in how promotions, recognition, and growth frameworks are actually designed.
Knowledge sharing as a prerequisite for seniority
The practical mechanism Tolkačiov points to is one that many mature organizations have already built into their competency frameworks, even if the game theory rationale is rarely stated explicitly: knowledge sharing as a requirement for advancement.
"If you're senior, you need to share the knowledge. If you don't, probably you won't be promoted. There's some rail guards in there in your growth that you need to start sharing. If you have knowledge and you can do something and only you can do that for an organization, that might not be very good. But if you can share that knowledge and then other five people can do that same thing, that's like you multiply your knowledge, your capabilities by five."
The feedback loop this creates is not just altruistic. Sharing knowledge exposes the sharer to feedback they would not otherwise receive. Other people find the gaps, the edge cases, the assumptions that seemed obvious but were not. The expert improves because the knowledge is now being applied by people who approach it differently. The team improves because the constraint that only one person could do a critical thing has been removed.
This is the long game that competitive incentives cannot produce, because they give individuals every reason to prevent it.
See how this works in practice.
Our QA consulting and QA management services help engineering teams build testing strategies that scale—from establishing quality culture to AI-augmented test automation.
The hardest thing to unlearn from a QA career
Tolkačiov's path to engineering management went through a CTO role that did not work out, a chapter he describes with unusual candor. He saw the red flags. He took the role anyway, driven by ambition and a belief that he could handle any challenge. He was fired. The recovery took time and involved sustained self-questioning about what he had missed and what he could have done differently.
The lesson was not simply about pacing a career. It was about the difference between wanting a role and being ready for it. The grunt work at each level exists for a reason. It builds the contextual judgment that cannot be imported from a more senior position.
The same principle shows up in what he describes as the hardest mindset shift he had to make as an engineering manager, and the one that most directly contradicts the instincts of an experienced QA engineer.
"Ironically, when you are a QA, you want to fix all the bugs. You have quality issues and you're not happy with that. And then when you are more responsible for the delivery and the focus of the team, you start to understand that not everything needs to be fixed. It costs money, it costs time, and if we are fixing this bug for a week, that means we are not working on another feature which might be required by the user—more useful."
Every bug fix is a resource allocation decision made at the expense of something else. A feature that helps 95% of users and creates a workaround problem for 5% may still be worth releasing early, with support prepared to handle the minority case, rather than delaying the value for the majority. This is not a compromise of quality standards—it is what quality actually means at the system level rather than the component level.
The example he gives from NordPass is instructive: a pre-release check revealed that special characters in address fields were not handled correctly, and the issue affected a non-trivial range of users. The team released anyway, prepared customer support with a response protocol, fixed the bug in 24 hours, and received exactly one complaint. The decision was correct. A purist QA instinct would have delayed the release; a systems-level quality judgment shipped it with a mitigation plan.
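The 95/5 trade can be made concrete with back-of-the-envelope numbers. Every figure below is assumed for illustration (none come from the episode); the point is the shape of the reasoning, not the specific values.

```python
# Rough expected-value comparison of two release options.
# All numbers are illustrative assumptions.
users = 10_000
value_per_user_per_week = 1.0   # arbitrary value units
affected_share = 0.05           # users hit by the known issue
workaround_discount = 0.5       # affected users still get half the value

# Option A: ship now; support handles the affected 5% with a workaround.
ship_now = users * value_per_user_per_week * (
    (1 - affected_share) + affected_share * workaround_discount
)

# Option B: hold the whole release a week to fix the bug first.
delay_a_week = 0.0              # nobody gets any value this week

print(ship_now, delay_a_week)   # 9750.0 delivered vs. 0.0 deferred
```

Under these assumptions, shipping with a mitigation plan delivers 97.5% of the possible value immediately, while delaying delivers none of it for a week, which is the system-level framing the paragraph above describes.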
Delivering difficult feedback without destroying the relationship
Quality only improves if issues are surfaced clearly and acted on, which means QA engineers have to deliver unwelcome news to developers who are often context-switching away from something else and are not necessarily delighted by the interruption.
Tolkačiov is precise about the friction this creates, and the source of it:
"When we find an issue, it's the value we create as QA to the team—that there is a problem and how to fix it, how to reproduce it. And if we're too excited about it, we come to the developers very happy, like, 'Hey, I found an issue. Hooray.' And it might be a very different level of emotions and energy. And that mismatch might create even more friction."
The advice he draws from Kim Scott's Radical Candor framework—challenge directly, care personally—translates practically into embedding feedback within a relationship where the developer knows the QA engineer is trying to help them succeed, not catch them failing. Offering to help reproduce the bug, flagging it in a way that treats it as a shared problem rather than an accusation, and calibrating the energy of the delivery to the recipient's current state are not soft skills decorating the technical work. They are the mechanism by which the technical work actually gets done.
The longer-term version of this is QA engineers functioning as coaches, teaching developers basic testing techniques so that issues are caught earlier, closer to the point of creation, and with less emotional charge because the developer was part of finding them.
Security-first QA: What testing zero-knowledge architecture actually requires
Most QA environments allow testers to manipulate data directly, to set up specific conditions, seed the database with edge cases, and verify that the system responds correctly. NordPass's zero-knowledge architecture removes that option entirely.
"We're using zero-knowledge architecture. That means that we don't know what the data in the database is. It's just gibberish to us. We cannot manipulate it. We cannot change it. We cannot understand how it works without the part of UI."
This is not a minor constraint. It means that the standard toolkit for database-level testing simply does not apply. QA engineers at NordPass need a working understanding of cryptography—how encryption and decryption work, what zero-knowledge proofs do and do not guarantee, how to reason about data integrity when the data itself is opaque. The company has built custom tooling to work within this constraint, but using that tooling effectively requires knowledge that most QA engineers entering the field have never needed to develop.
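To make the constraint concrete, here is a toy sketch of why server-side data is opaque under client-side encryption. This is deliberately simplified pedagogical code, not real cryptography and not NordPass's implementation: the key is derived on the client and never sent to the server, so the stored bytes are gibberish to anyone testing at the database level.

```python
# Toy keystream cipher illustrating the zero-knowledge principle.
# WARNING: illustrative only. Real products use vetted ciphers (e.g.
# AES or XChaCha20) with keys derived from the user's master password.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def client_side_crypt(master_key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(master_key, len(data))))

# The key exists only on the client.
master_key = hashlib.sha256(b"user-master-password").digest()

# The "server" stores only ciphertext it cannot interpret.
server_db = {"vault_item_1": client_side_crypt(master_key, b"hunter2")}

# Database-level testing sees gibberish; only the client can read it back.
assert server_db["vault_item_1"] != b"hunter2"
assert client_side_crypt(master_key, server_db["vault_item_1"]) == b"hunter2"
```

The second assertion only passes on the client because it holds the key, which is precisely why seeding or inspecting the database directly, the standard QA move, is unavailable in this architecture.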
The mandatory security review process that governs every feature release at Nord Security reinforces this further. Every feature must pass security review, penetration testing, and threat modeling before it ships. This is not a checkbox. It is a structured process that routinely identifies attack vectors that functional testing would miss entirely, and the results feed back into QA processes as learning for future development cycles.
The constraint that makes testing harder also produces better engineers. People who have developed security testing skills at Nord Security carry those skills everywhere, and those skills are among the most valued in the industry.
AI and the junior engineer trap
On AI's role in QA, Tolkačiov's position is precise in a way that most commentary on the subject is not.
AI tools are useful. They will become more useful. QA engineers who do not understand how to work with them will be left behind. All of that is true and not particularly controversial.
The dangerous part is the shortcut AI appears to offer junior engineers: the ability to generate test cases, write automation scripts, and produce outputs that look like senior work before the underlying competence that makes senior work reliable has been built.
"The more junior you are, the less AI you should use in your workflows. This will allow you to get the skills you need in order to assess if AI is correct and to critically judge the output of AI and then can you trust it or not. And then you get to the senior level when you already have really good knowledge and then you can utilize AI even more."
The problem AI creates for juniors is not that it makes them lazy. It is that it removes the feedback loops that build judgment. A junior engineer who uses AI to generate test cases without knowing what good test cases look like cannot tell whether what AI produced is good or not. They ship the AI's output with confidence they have not earned. When the AI hallucinates—and it does—they do not catch it because they lack the baseline to compare it against.
The hard-earned skills are not just a prerequisite for doing senior work. They are a prerequisite for knowing whether the AI's output is doing senior work or producing convincing nonsense.
Staying on the edge
The through line in Tolkačiov's career—from the video game curiosity that first drew him to programming, through QA engineering, leadership, the CTO failure, and now engineering management at a security company—is a consistent orientation toward what he calls staying on the edge.
"The AI models we use today will be outdated in two months. It's that fast. But you get more tool and framework agnostic. You learn that nothing is going to stay there forever and you need to get this knowledge and craftsmanship not related to the tooling itself, more about what needs to be done. If the tool helps me, that's fine, but it's just a tool. If I still know how to get it even without a tool, that will be helpful."
The edge is the zone between what is already mastered (comfortable, but no longer stimulating) and what is genuinely beyond current capability (overwhelming, and therefore not productive). Work at the edge draws on the competences you already have while presenting problems you have not yet solved. It is where learning is fastest and where a career compounds.
For QA engineers in an AI-saturated environment, staying on the edge means developing the skills that make AI output evaluable: deep testing craft, security knowledge, an understanding of the systems being tested, and the professional community connections that surface knowledge faster than any individual can accumulate it alone.
Essentially, engineering teams that sustain quality over time are not built on competitive individual performance. They are built on trust, structured knowledge sharing, and incentive systems that make cooperation the rational choice. The leaders who understand this create environments where every person's knowledge multiplies across the team, and where the constraints that make the work harder—zero-knowledge architecture, security review gates, the discipline of not relying on AI before building foundational skills—produce engineers who are significantly more capable for having worked through them.
→ Listen to the full conversation with Nikolaj Tolkačiov on the Tech Effect podcast
Key takeaways
People default to self-interest, not cooperation, and incentive structures that reward individual delivery over knowledge sharing will produce exactly the team dysfunction leaders were trying to avoid. Cooperation is not a cultural aspiration; it is a design problem that requires deliberate structural choices.
Trust is the foundational element of team culture. Not data-informed decisions, not continuous learning, not shift-left quality practices. Without trust, none of the others take hold. Build trust first and the rest becomes possible.
Not every bug should be fixed. Every release decision involving a known issue is a resource allocation decision. Engineering managers who retain the QA instinct to fix everything will consistently deprioritize more valuable work in favor of lower-impact quality improvements that could wait or could be mitigated at lower cost.
Juniors who over-rely on AI before building foundational skills cannot evaluate AI output critically, and will ship hallucinations with confidence. The skills that make someone capable of using AI effectively are the same skills that AI appears to make unnecessary. There is no shortcut through this.
Zero-knowledge architecture creates a testing environment where standard QA methods do not apply. Testing at NordPass requires cryptography knowledge, custom tooling, and a security mindset that most QA engineers need to actively develop. The constraint is also an accelerator: people who work through it emerge significantly more capable.
Knowledge sharing is how individual competence becomes team capability. If only one person can do a critical thing, the organization is fragile. If five people can do it because it was taught and documented, the team has multiplied its capacity without adding headcount—and the person who shared the knowledge usually improves through the feedback they receive.
FAQ
Most common questions
Why do competitive incentives produce knowledge hoarding in engineering teams?
When teams are measured on individual delivery, each person has a rational incentive to protect what they know because sharing reduces their advantage in a competitive environment. People stop asking for help because it signals weakness. They stop sharing what they've learned because it levels the playing field. Velocity metrics look fine in the short term while knowledge silos, burnout, and imposter syndrome compound underneath.
How does game theory apply to engineering team management?
The prisoner's dilemma demonstrates that in a single interaction, self-interest produces a better individual outcome than cooperation, so the rational choice is to defect. But in iterative environments, cooperation consistently outperforms defection because the accumulated benefits of mutual assistance exceed what any individual can achieve alone. Engineering teams are iterative environments. Leaders who design incentive systems without accounting for this will get the single-round outcome, individuals optimizing in parallel, rather than the compounding outcome that cooperation produces.
What is the practical mechanism for building cooperative engineering teams?
Making knowledge sharing a structural requirement for advancement, not just a cultural aspiration. When promotion to senior levels requires demonstrable knowledge transfer, individuals have a direct incentive to multiply their capabilities across the team rather than protect them. The sharer improves through exposure to feedback and different approaches. The team removes single points of failure. The compounding effect of distributed knowledge replaces the ceiling of individual expertise.
Why should junior engineers limit their use of AI tools?
Because AI tools remove the feedback loops that build foundational judgment. A junior engineer who uses AI to generate test cases without knowing what good test cases look like cannot evaluate whether the output is correct, and when AI hallucinates, they won't catch it. The hard-earned skills built through doing the work manually are not just a prerequisite for senior performance. They are a prerequisite for knowing whether AI output is reliable or convincingly wrong.
What does quality decision-making actually look like at the management level?
It means accepting that not everything needs to be fixed, and that every bug fix is a resource allocation decision made at the expense of something else. The hardest mindset shift for experienced QA engineers moving into management is recognizing that releasing with a mitigation plan—support prepared, fix scheduled—can be the correct quality decision. Delaying value for the majority to protect the minority from a workaround is not always the right trade. Quality at the system level is different from quality at the component level.
Is your engineering team set up to make each other better—or just to perform individually?
At TestDevLab, we help engineering and QA teams build the quality culture, knowledge sharing practices, and testing strategies that compound over time. Whether you're establishing quality frameworks from scratch or scaling what's already working, let's talk about what your team actually needs.