
Test case maintenance


#1

Hello all.

[background]
We are currently running a project in which around 1,000 test cases have been created so far.
The project covers different areas of an ERP (Sales, Accounting, Purchase, HelpDesk, and so on).
Every month we receive new requirements, which means that every month we create new test cases or update existing ones to cover them.

[problem]
Due to the complexity and size of the project, we feel that we are losing control over deciding whether to create a new test case or update an existing one.

Sometimes a new requirement impacts a step that is used in 30 or 40 test cases, and we have no reliable way to identify all the affected test cases.

Another problem arises when bugs come in: since a bug is normally related to one specific step in a larger workflow, we are not sure whether we should create a new test case for each bug or update an existing one.
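To illustrate the kind of traceability we are missing, here is a rough sketch (the step names and test case IDs are invented) of a reverse index from shared steps to the test cases that use them; with something like this in place, finding the affected cases would be a lookup instead of a manual search:

```python
from collections import defaultdict

# Invented example data: each test case lists the shared steps it relies on.
# In practice this mapping could come from tags, custom fields, or references
# maintained alongside the test cases.
test_cases = {
    "TC-101": ["login", "create_sales_order"],
    "TC-102": ["login", "post_invoice"],
    "TC-205": ["login", "create_purchase_order"],
}

# Reverse index: shared step -> test cases that use it.
step_index = defaultdict(set)
for case_id, steps in test_cases.items():
    for step in steps:
        step_index[step].add(case_id)

# If a new requirement changes the "login" step, the affected cases
# are one lookup away.
print(sorted(step_index["login"]))  # ['TC-101', 'TC-102', 'TC-205']
```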

[solutions]
We have been discussing 2 different approaches for the problem:

1st. Create a new test case for every new requirement

  • In this first approach we would ignore any test case created in the past.
  • We would create new test cases for every new Jira ticket (bug, improvement, task), sometimes copying an existing test case and editing some of its steps, but NOT changing the old/existing test case.
  • We would create a Test Plan for every sprint and select only the test cases created in that sprint to be run (a rough sketch of how this could be automated follows this list).
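As an illustration of that last bullet, a sprint-scoped Test Plan could in principle be assembled automatically. This is only a sketch: the base URL, endpoints, field names and IDs below are placeholders, not the real API of our test management tool.

```python
import requests

# Placeholder connection details -- adjust to whatever API the tool exposes.
BASE_URL = "https://testtool.example.com/api"
AUTH = ("user@example.com", "api-key")

def cases_created_in_sprint(project_id, sprint_start, sprint_end):
    """Return the IDs of test cases created within the sprint window."""
    resp = requests.get(
        f"{BASE_URL}/projects/{project_id}/cases",
        params={"created_after": sprint_start, "created_before": sprint_end},
        auth=AUTH,
    )
    resp.raise_for_status()
    return [case["id"] for case in resp.json()]

def create_sprint_plan(project_id, plan_name, case_ids):
    """Create a Test Plan containing only the given test cases."""
    resp = requests.post(
        f"{BASE_URL}/projects/{project_id}/plans",
        json={"name": plan_name, "case_ids": case_ids},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()

# Example usage with invented IDs and dates.
case_ids = cases_created_in_sprint(42, "2023-05-01", "2023-05-14")
plan = create_sprint_plan(42, "Sprint 23 - new requirements", case_ids)
print(plan)
```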

Advantages:

  • We would make sure that all new test cases match the new requirements, since we would create them one by one.
  • Maintenance would be easier, since we would not need to search for the existing test cases affected by each new requirement.

Disadvantages:

  • We would generate a huge number of test cases every sprint, and the old ones would become useless since they would be obsolete (just imagine a test case for login: every new requirement for password, e-mail, user name, user ID, etc. would generate yet another login test case).
  • We would run the new test cases, but we could not be sure that what we already implemented still works as expected, because we would be ignoring the old test cases while the new ones might not cover everything in our large system.

2nd. Update test cases or create new test cases for every new requirement
Advantages:

  • We would make sure that all test cases in the repository are up to date with the latest requirements.

Disadvantages:

  • Maintenance is hard, since we would need to search for all the existing test cases affected by each new requirement.

Further Information
Other than that, we are considering using test cases as training documentation for newcomers to our project, or even as a software guide for our clients. We think that when people perform test cases, they will come to understand the system. Is that feasible, or are test cases meant only for the QA team to do software testing? What do you think?


#2

To answer the basic question, checklists…
Create a test case where each test step has its own Pass/Fail status.
Compress the basic tests into a set of tests that are run frequently and don't change very often, identify them as regression, and reuse them from project/test plan to project.
Cross-reference them against the original stories in Jira and then add the new ones they cover… incremental changes in a defined package.

After that it’s a case of adding new tests as needed.

One key aspect of managing your test cases is to structure them in sections that can be mapped to your product. Having a section called “Regression” and popping the checklists in there (with the product structure recreated inside) makes keeping track of them that much easier.
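To make that concrete, here is a hypothetical layout (using the ERP areas from your post rather than my actual product) in which the same tree sits under both top-level sections:

```python
# Hypothetical section layout: the product tree is mirrored under a
# top-level "Regression" section that holds the checklists.
sections = {
    "Product": ["Sales", "Accounting", "Purchase", "HelpDesk"],
    "Regression": ["Sales", "Accounting", "Purchase", "HelpDesk"],
}

for top_level, areas in sections.items():
    for area in areas:
        print(f"{top_level} / {area}")
```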

The next bit might be a wee bit controversial for some:

Test Cases are not training docs, marketing info, user docs or design docs.
If people want those items, they must invest in them separately.
If that is the objective of your managers/org, they are not considering that you are first and foremost testers. If people want to use the test cases for other purposes, they must decide either to reuse them exactly as they are and take the hit, or to repurpose the information for themselves. Do not get caught up doing a tech writer's, designer's, user support or field support job.

My €0.02 🙂


#3

@ivormac thanks for your good ideas!

bgomes and I are coworkers. We’ve applied most of them. To manage incremental changes, I create a Test Plan for each release, and ‘Complete’ it in order to ‘freeze’ the content and track history.

If I understood correctly, you suggest that we update old test cases for changes in an existing workflow ('incremental changes') and create new ones for new functionalities/workflows, right?

One thing I didn't get is the use of 'regression' and 'checklists'. Could you please give us a simple example? It would help a lot. 🙂 🙂


#4

[screenshot: example product structure]

This is just a simple representation of a product we develop and maintain.
This structure sits under both the “Product” and “Regression” main sections.

Regression test packs look like this:

[screenshot: regression test pack]

They are of a type "Regression" using a "Checklist" template.

When you include this in a Test Run within a Test Plan and execute it, you are recording the test status using a simpler format that shouldn't require much effort to maintain.

Adding a test result you get:

[screenshot: adding a test result]

Which looks like this when you complete the process:

[screenshot: the completed checklist]

Hope this helps…

Ivor