Testing your Test Software: Regression Testing

I've worked on many different test programs and with many different companies in the 16+ years I've been doing test engineering consulting, and during that time I have been surprised at the lack of software quality measures in the semiconductor test development process. Test programs can be complex software projects, often with multiple developers, and given the critical nature of the work and the focus on shipping "good parts," I'm surprised that more companies haven't adopted the standard procedures used in the software development community. Due to the nature of the IC market, programs are often considered a "work in progress," with more and more tests and patterns added as a device marches from customer samples toward mass production. This incremental development process can introduce several "gotchas" that are well understood by software developers. Who hasn't released a test program to production only to find out the most recent changes broke another portion of the program? The problem of "fix one thing, break another" is a common test engineering problem with a simple and often overlooked solution: regression testing.

Regression testing is used in the software development community to verify that changes or corrections made in the past still behave as expected and that new edits have not affected them. Software developers create test suites that target specific operations and then run those tests each time a new version of the program is to be released. The number of tests in the regression suite grows over time as new situations are encountered and deemed important enough to guard against.

Regression testing in the software development world is a little easier, primarily because the only thing being tested is the code the developers have written (and maybe the operating system it runs on). The inputs required to exercise the behavior of interest depend on the application, but generally they are some kind of configuration file, test code, or GUI test driver (i.e., more code). The developer runs the application with the proper inputs, then saves the resulting output for comparison against what is expected (the baseline). If the output matches the baseline, the test passes; otherwise the regression test fails.
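As a minimal sketch of that run-and-compare loop (in Python, with a made-up application name, command-line options, and file names purely for illustration), a baseline check can be as simple as:

    import subprocess
    import sys
    from pathlib import Path

    def run_regression(command, baseline_path, output_path):
        """Run the application once and compare its output to a saved baseline."""
        # Capture the program's output to a file (exact-match comparison).
        result = subprocess.run(command, capture_output=True, text=True)
        Path(output_path).write_text(result.stdout)

        baseline = Path(baseline_path).read_text()
        if result.stdout == baseline:
            print("PASS: output matches baseline")
            return True
        print("FAIL: output differs from baseline")
        return False

    if __name__ == "__main__":
        # Hypothetical application, config file, and baseline file names.
        ok = run_regression(["./my_app", "--config", "regress.cfg"],
                            "baseline_output.txt", "new_output.txt")
        sys.exit(0 if ok else 1)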

The test engineer has the additional variables of the device-under-test (DUT), the device interface board (DIB or loadboard), and the tester. Each test in a regression suite may require a specific set of inputs, a specific loadboard, tester, and/or device to target a specific piece of functionality. For example, say you find a device with a gross supply-to-ground short that produces a run-time error. The test program should identify the short as early as possible, bin the device as a failure, and continue to the next device without a run-time error (which generally shuts down the test cell and can cause a host of production problems). Once a correction has been made to the program, that device can be saved and used as the input to a regression test that verifies the supply-to-ground fix is in place and working.

On the positive side, the test engineer has a great deal of output data to use as a comparison point. Test datalogs contain an enormous amount of information that can be used to detect that a change has occurred. That same volume of information can make the actual comparison daunting without automation. Another wrinkle is that a test value can vary from run to run, even for the same device, which generally rules out a simple "diff" between two datalogs. When you have hundreds of tests that behave this way, a visual inspection of the before-and-after datalogs is not practical, so a script or software tool is needed to automate the comparison.
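The first piece of that automation is reducing each datalog to something a script can compare. Datalog formats differ from tester to tester, so the column layout assumed below is only an illustration; the point is to boil each datalog down to a map of test number to test name and measured value:

    def parse_datalog(path):
        """Parse a whitespace-delimited datalog into {test_number: (test_name, value)}.

        Assumes an illustrative line format of:
            <test_number> <test_name> <low_limit> <measured_value> <high_limit> <unit>
        Real datalog layouts vary by tester platform, so the parsing
        will need to be adapted to your own datalog format.
        """
        results = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 5:
                    continue  # skip headers, blank lines, and summary records
                try:
                    test_num = int(fields[0])
                    value = float(fields[3])
                except ValueError:
                    continue  # not a test-result line
                results[test_num] = (fields[1], value)
        return results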

So how's it done? Here's my approach to a simple regression testing technique for test programs:

  1. Identify the inputs required for the specific regression test. This may be a specific loadboard, device, tester, flow-control inputs, specifications, etc. Many times any device will do, but it depends on what you're targeting. Generally, it's a good idea to document what each test in the regression suite requires, though it isn't absolutely necessary.
  2. Set up your test environment (install the loadboard, calibrate, insert parts, etc.).
  3. Load the new version of the test program. Configure the inputs (from item 1), set up the output (save the datalog), and run the program once. Save the datalog and close the program.
  4. Load the previous (baseline) version of the program, configure the inputs from item 1, and set up the datalog. Run the old program once, save the datalog, and close the program.
  5. Offline (no tester needed), compare the two datalogs. The comparison should include all pertinent details found in the test datalog, such as test numbers, test names, limits, and specs, as well as the actual test values. The expected run-to-run variance in the test values can be handled by allowing a "percent difference" for the values. DataView, a product from Test Spectrum, can be used to verify and document the differences between two device datalogs; it allows you to enter a percent-difference value and uses it to determine whether the values from the two datalogs are within the allowable difference. A simple scripted version of this comparison is sketched below.
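Here's a minimal sketch of that percent-difference comparison as a standalone script (this is not DataView, just an illustration that reuses the hypothetical parse_datalog() helper shown earlier):

    def compare_datalogs(baseline, new, pct_tolerance=5.0):
        """Compare two parsed datalogs, allowing a percent difference per test.

        baseline and new are {test_number: (test_name, value)} dictionaries,
        e.g. from the parse_datalog() sketch above. pct_tolerance is the
        allowable percent difference between baseline and new values.
        """
        failures = []
        for test_num, (name, base_val) in baseline.items():
            if test_num not in new:
                failures.append(f"Test {test_num} ({name}) missing from new datalog")
                continue
            new_val = new[test_num][1]
            if base_val == 0.0:
                # Avoid divide-by-zero; require an exact match for zero baselines.
                pct_diff = 0.0 if new_val == 0.0 else float("inf")
            else:
                pct_diff = abs(new_val - base_val) / abs(base_val) * 100.0
            if pct_diff > pct_tolerance:
                failures.append(f"Test {test_num} ({name}): {base_val} -> {new_val} "
                                f"({pct_diff:.1f}% difference)")
        for test_num in new:
            if test_num not in baseline:
                failures.append(f"Test {test_num} added (not in baseline datalog)")
        return failures

    # Example usage with the datalog files saved in steps 3 and 4 (names are hypothetical):
    #   diffs = compare_datalogs(parse_datalog("baseline.log"), parse_datalog("new.log"))
    #   print("\n".join(diffs) if diffs else "PASS: datalogs match within tolerance")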

To learn more about this and other techniques for debugging your test program using data analysis, download the white paper "Test Program Debug using Data Analysis."