Automated Application Testing and Sensitivity Analysis

In recent weeks, I have been thinking about how we test our applications, like our popular Profit Network or Profit Vehicle Planner.  When we test, we run data sets designed to stress the system in different ways, to ensure that all the important paths through the code are working properly, and with each round of testing our applications get better. There are many good reasons to test, but the most important is to know that an improvement in one part of the code does not break a feature in a different part of the code.

Application Harness

I have been thinking about how we could test our code a bit more, and the means by which we could do that. I have been reading about automated testing and its benefits. They are many, but the upshot is that if the testing is automated, you will likely test more often, and that is a good thing.  Automating application testing requires the ability to churn out runs with nobody watching. To do that, the application must be able to be kicked off and run with no buttons or dialog boxes that must be clicked manually to continue, no settings that must be set by hand, and no information to review in order to decide what to do next. In addition, the application must then save its results somewhere: in the instance of the application, in a log file, or in a database of some sort. Finally, for it to really be testing, the results must be compared to the expected results to determine the pass/fail state of the test, which requires having a set of expected results for every test data set.
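To make that concrete, here is a minimal sketch, in Python, of what one unattended test run might look like. The command-line entry point (profit_app), its flags, and the CSV result layout are stand-ins rather than our actual applications; the point is simply that nothing in the loop needs a person to click anything.

import csv
import subprocess
from pathlib import Path

def run_application(data_set: Path, results_file: Path) -> None:
    # Kick off the application headlessly; no buttons or dialogs to click.
    subprocess.run(
        ["profit_app", "--input", str(data_set), "--output", str(results_file)],
        check=True,
    )

def load_results(path: Path) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def test_data_set(data_set: Path, expected_file: Path, out_dir: Path) -> bool:
    # Run unattended, then compare the actual results to the stored expected results.
    results_file = out_dir / (data_set.stem + "_results.csv")
    run_application(data_set, results_file)
    actual = load_results(results_file)
    expected = load_results(expected_file)
    passed = actual == expected  # pass/fail: every result row must match
    print(f"{data_set.name}: {'PASS' if passed else 'FAIL'}")
    return passed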

 

Looking at this process, I see numerous similarities to the process used to run a sensitivity analysis: many runs are typically made (so automation is a natural help), and the results need to be recorded. Sensitivity analysis is a typical process for users of our Profit Network tool, and our Profit Planner and Profit Scheduler tools.   An additional step in sensitivity analysis, however, is that you may want to change the input data in a systematic way (say, Demand +5% and Demand -5%), and to the extent that the changes are indeed systematic, they too could be folded into the automation. The results analysis is different as well: here you would like to look across the final sets of results at the differences, while in testing you just compare one set of test results to its expected results.  I can foresee difficulty in automating the data changes, since each type of data may need to be changed in a very specific way.  Nevertheless, even if the data changes are manual, they could be prepared ahead of the run, and the runs themselves could be grouped into a batch to generate the results needed for a sensitivity analysis.
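For the sensitivity side, a sketch along these lines shows how the systematic part of the data change could be folded into the same batch machinery. The demand column, the total_profit result field, and the profit_app entry point are all assumptions made for illustration; the structure is what matters: perturb the data, run unattended, and collect the results side by side.

import csv
import subprocess
from pathlib import Path

# Scale factors applied to the demand column for each scenario (Demand +/-5%).
SCENARIOS = {"demand_plus_5": 1.05, "base": 1.00, "demand_minus_5": 0.95}

def write_scenario(base_data: Path, out_path: Path, demand_factor: float) -> None:
    # Copy the base data set, scaling only the demand column.
    with open(base_data, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["demand"] = str(float(row["demand"]) * demand_factor)
            writer.writerow(row)

def run_sensitivity(base_data: Path, out_dir: Path) -> dict:
    # Prepare each scenario, run it unattended, and collect one key result per run.
    summary = {}
    for name, factor in SCENARIOS.items():
        data_set = out_dir / f"{name}.csv"
        results_file = out_dir / f"{name}_results.csv"
        write_scenario(base_data, data_set, factor)
        # Same hypothetical headless entry point as in the sketch above.
        subprocess.run(["profit_app", "--input", str(data_set),
                        "--output", str(results_file)], check=True)
        with open(results_file, newline="") as f:
            first_row = next(csv.DictReader(f))
        summary[name] = float(first_row["total_profit"])  # hypothetical result field
    # Here you look across scenarios at the differences, not against expected results.
    return summary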

Constructing a harness that lashes up to an application, where you can define the number of runs to be made, the settings for each run, the different data sets to be used, and the output location for the results to be analyzed, would be useful not only for testing but also for the type of sensitivity analysis we do a lot of here at Profit Point.
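One way such a run definition might look, sketched with made-up field names rather than any actual Profit Point interface: each run carries its data set, its settings, its output location, and (for test runs) its expected results, and the harness simply walks the list.

from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional

@dataclass
class RunSpec:
    name: str                                      # e.g. "regression_case_3" or "demand_plus_5"
    data_set: Path                                 # input data for this run
    settings: dict = field(default_factory=dict)   # application settings to apply
    expected: Optional[Path] = None                # expected results, if this is a test run
    output_dir: Path = Path("results")             # shared location for results to analyze

def run_batch(runs):
    # Execute every run in the batch unattended, writing each result set to the
    # shared output location; the pass/fail comparison or the cross-run analysis
    # happens afterwards from that store.
    for spec in runs:
        results_file = spec.output_dir / f"{spec.name}_results.csv"
        print(f"running {spec.name}: {spec.data_set} -> {results_file}")
        # run_application(spec.data_set, results_file) would plug in here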

I am going to encourage our developers to investigate this type of harness, one that can talk to and control our applications, run them automatically, and store their results in a data store for either testing or sensitivity analysis.

Jim Piermarini  |  CEO Profit Point Inc.

 

About Jim Piermarini

Jim has extensive experience in the chemical industry, including ten years working in a plant and twenty additional years working with businesses to improve their supply chains.
