Software testing in the right context

Testing for software delivery

It is common knowledge that software testing boomed around the year 2000, when companies suddenly faced the threat of the Y2K bug and the introduction of the euro. What fewer people know is that software testing has been around since the 1950s. Over those sixty years, the testing paradigm has clearly shifted several times.

In today's market, we observe a trend where testing is moving from being just a phase in the project towards a continuous verification of business needs against product features throughout the complete Application Lifecycle.

Our vision on software testing

Software testing is more than discovering and debugging defects. To be efficient and effective today, testing needs to challenge misunderstandings and uncover unknowns as early as possible, by integrating much more tightly with the team responsible for developing the software product. In addition, getting to know an application this way improves communication with the other stakeholders and results in better solutions.

Embracing this learning paradigm means embracing context-driven testing. Context-driven testers choose the right testing objectives, techniques, and deliverables (including test documentation) by looking first at the details of the specific situation, including the needs and desires of the stakeholders who commissioned the testing. The essence of context-driven testing is the project-appropriate application of skill and judgment.

Testing What Matters, When It Matters, in a Way That Matters

The context-driven school has put forward seven basic principles:

  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem is not solved, the product does not work.
  6. Good software testing is a challenging intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices”, we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances. The goal is to test the software well, not to show that we have applied the methodology in place to the letter.

What can we do for you?

Why Realdolmen as your testing partner?

We aim to be your testing partner rather than your testing supplier. Together with the Realdolmen ALM (Application Lifecycle Management) team, we want to help you increase efficiency and effectiveness when designing, developing, testing, releasing and maintaining solutions that solve the needs and problems of your business users and customers. Our primary interest is not to supply bodies for repetitive testing work, but to provide knowledgeable consultants who are efficient and effective in what they do. They have the right skill set, and their knowledge extends beyond the boundaries of software testing: they know how the different processes within the Application Lifecycle interact with one another, and they have gained experience on large and/or complex projects.


Testing in the right context

By looking at the specific context of a project, and by adapting our approach to that context, we make our testing much more efficient than if we stuck to one pre-defined approach that is supposedly valid for all projects. The context of the project determines our approach, and it will differ from project to project. Our consultants are trained to cope with these differences. We also believe that by combining exploratory testing techniques with (automated) scripted testing, where that is most effective, we further optimize our testing activities.

Our Testing & Quality Assurance unit has very consciously been organized within the same division as the Application Lifecycle Management (ALM) unit. We operate closely together and regularly contribute to each other's successes. This cooperation encourages our testers to look beyond the boundaries of the testing domain and to gain a better understanding of how different people, processes and tools affect one another.

Test tools for software delivery

Test tools - just a tool in the toolbox

Tools are only a means to a solution; they are never a solution in themselves. This is why we look at which tool fits best in each project context. We want to keep you from wasting effort and money by implementing tools and forcing them onto projects where they will be used ineffectively.

We work with tools that support our philosophy: tools that support exploratory test practices and that reduce test automation maintenance by being more robust.

We prefer to implement tools that integrate with the tools used by other teams in the Application Lifecycle (version management, configuration management, build management, work item tracking, planning, design…). This makes communication and reporting a lot more efficient.

Should you be using a tool that is not right for your context (or not have one yet), we can also help you identify more appropriate alternatives and migrate your data where possible.

Efficiency and effectiveness to reduce costs

Our team's extensive experience with complex testing projects has taught us to be flexible, to focus on the context of each project, and to take project and product risks into account. We use exploratory testing and technical testing to help the project team improve product quality within time and budget.

This not only makes our testing more efficient, it also makes it more effective. We focus on both efficiency and effectiveness because they complement each other: you can be very efficient in your testing, but if your testing does not contribute to the overall project goal (delivering software that solves one or more business problems), then your testing is not effective.

We believe that by applying the context-driven principles in our work, our approach will be more efficient and effective than a more traditional one. Being more efficient and effective means a greater return for the same (or even a lower) cost for testing and the project.
