
Manual Data Driven Testing


#1

I am struggling to put together a test run/plan that will work for data-driven testing.

Each day/batch, the data can move to a different scenario where we need to test certain features and functionality.
Let’s say I need to perform different functions or validate info on a given day on iOS, Android, and Web, and I set milestones for the releases, e.g. iOS Build 1, Android Build 2, and Web v2.5. What is the best way to run the specific tests needed across all platforms in a given cycle?

I have tried using a combo of test plans and test runs, but to no avail.
Note: My test suites are organized by function, as the platforms (iOS, Android, Web) share a majority of the same feature sets. My first plan was to use the configuration option in the test plan for platform-specific testing.

The problem is I don’t see an option to manage the data being tested and the test cases I want run for a specific data point/account on a specific configuration on a given day.

Day 1 - run x number of tests, with certain tests run on specific platforms
Day 2 - same thing

Eventually these will be automated, but I am not yet in a position to automate these scenarios.


#2

Hello,

Thanks for your post! I would recommend using configurations in this case; you can easily customize the case selection per configuration when setting up a test plan.

I hope this helps!

Cheers,
Tobias


#3

Thank you, but this doesn’t really answer the question for daily cycles.

I only see the customizations on test plans. Is that correct?

NOTE: I have multiple customers that we support on the same/similar product, but possibly on different versions.

Customer 1 / Day 1 batch cycle:

  • Test case 1 / feature 1 on configuration x
  • Test case 2 / feature 2 on configuration y
  • Test case 3 / feature 1 on configuration y

Customer 1 / Day 2 cycle:

  • x number of test cases

In the end I need to be able to report on the feature, configuration, and customer. The cycle is only important to get the data prepared.

Thanks.


#4

Hi there,

We would usually recommend organizing test cases by functional areas, which it appears you are already doing. You can then start test runs/plans for different releases/iterations and filter the test cases for each test plan or configuration by feature or customer as needed.

One thing that might be helpful is to tag test cases with customer/project details. That is, you could add a custom multi-select field for test cases and tag cases accordingly, so you can select the test cases for different runs/plans based on the customer release or feature you are testing. This approach usually works best if you regularly want to start many different plans/runs for different project or customer combinations. Configurations can then still be used to easily create runs for different platforms/systems.
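
For the daily cycles specifically, the run creation itself can be scripted against the TestRail API even while test execution stays manual. Below is a minimal sketch, not official TestRail code, assuming a hypothetical multi-select case field with the system name customer (exposed by the API as custom_customer) plus placeholder option and configuration IDs: it fetches the cases tagged for a customer and creates a plan with one run per platform configuration.

```python
# Minimal sketch using the TestRail API (v2) to create one day's test plan.
# Assumptions to replace with your own values: a multi-select custom case
# field with system name "customer", the option ID for "Customer 1", and
# the configuration IDs for iOS/Android/Web (see get_configs).
import requests

BASE = "https://yourcompany.testrail.io/index.php?/api/v2/"
AUTH = ("user@example.com", "your-api-key")  # TestRail email + API key

PROJECT_ID = 1
SUITE_ID = 1
CUSTOMER_1 = 1                                       # option ID in the custom field
CONFIG_IDS = {"iOS": 10, "Android": 11, "Web": 12}   # placeholder config IDs

def api_get(uri):
    r = requests.get(BASE + uri, auth=AUTH,
                     headers={"Content-Type": "application/json"})
    r.raise_for_status()
    return r.json()

def api_post(uri, payload):
    r = requests.post(BASE + uri, auth=AUTH, json=payload)
    r.raise_for_status()
    return r.json()

# 1. Fetch the suite's cases and keep those tagged for this customer.
#    Multi-select values come back as lists of option IDs under
#    "custom_<system_name>". (Newer TestRail versions paginate get_cases.)
resp = api_get(f"get_cases/{PROJECT_ID}&suite_id={SUITE_ID}")
cases = resp["cases"] if isinstance(resp, dict) else resp
case_ids = [c["id"] for c in cases
            if CUSTOMER_1 in (c.get("custom_customer") or [])]

# 2. Create the plan for today's batch cycle.
plan = api_post(f"add_plan/{PROJECT_ID}",
                {"name": "Customer 1 / Day 1 batch cycle"})

# 3. Add one entry with a run per platform configuration. Each run could
#    also get its own case_ids if a platform only needs a subset that day.
api_post(f"add_plan_entry/{plan['id']}", {
    "suite_id": SUITE_ID,
    "name": "Customer 1 daily cycle",
    "include_all": False,
    "case_ids": case_ids,
    "config_ids": list(CONFIG_IDS.values()),
    "runs": [{"config_ids": [cid], "case_ids": case_ids}
             for cid in CONFIG_IDS.values()],
})
```

The same script could then be pointed at a different customer value or case selection for each day’s batch, while reporting can still slice results by the custom field, functional area, and configuration.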