Semiconductor test engineers are typically not software developers. They’re electrical engineers who are required to be experts on the devices they’re testing. The knowledge required to understand the designer’s intent, configure the device to match that intent, set up the tester (ATE) to measure the expected output, and then dial in the operation so that it is consistent across process, voltage, and temperature is very specialized and takes years to acquire.
You wouldn’t take a software developer, stick them in a test engineering role, and expect them to succeed. Similarly, you wouldn’t hire a test engineer to write a pure software application. Yet test engineers are often required to write large, complicated programs to test their devices. Test programs typically don’t have complicated data structures or algorithms, but their interactions with the device, tester, and loadboard can quickly make them as difficult and complex as large, multi-threaded applications. To further complicate things, test engineers are often expected to learn new tester platforms as their device technologies change. Changing tester platforms often means learning a new development environment, a new programming language, and all the “gotchas” that come with them. It’s tough to stay on top of the day-to-day requirements of developing and maintaining a good test strategy.
I’ve spent quite a bit of time on many different testers and development environments. I’ve also spent a lot of time developing software tools and applications. There are several techniques and processes that are commonplace in the software development community that the test engineering world can benefit from. Here are a few examples – and sorry about the rhymes, I was channeling my inner Muhammad Ali…
- Reuse your code; reduce your load – Software engineers are highly efficient at reusing code. Test engineering tasks are often very specific to a particular device, but reuse is still possible. Organize your functions and methods around the DUT’s interfaces and test requirements. Build reusable modules where possible – tests like Continuity, Leakage, IO levels, Idd, SPI, I2C, and JTAG pop up again and again across device types and technologies. Create a good working module once and reuse it everywhere. I’ve created empty “Shell Programs” for most environments that I use as a starting point for new test programs. I just pop in a pinmap/channel map and can usually verify gross functionality in a matter of minutes.
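As a rough sketch of the idea, here is what a reusable continuity test could look like in Python, with the actual measurement call abstracted behind a callback so the same module works on any platform. The pin names, limits, and `measure_v` callback are illustrative assumptions, not any particular tester’s API:

```python
def continuity_test(measure_v, pins, lo_limit=-1.2, hi_limit=-0.2):
    """Reusable clamp-diode continuity check.

    measure_v: callable mapping a pin name to the measured voltage (in volts)
               with the test current forced. It is supplied by a thin,
               platform-specific layer, so this module never touches
               tester APIs directly and can be reused across platforms.
    Returns a {pin: True/False} pass/fail map.
    """
    results = {}
    for pin in pins:
        v = measure_v(pin)
        # A missing reading (None) or a voltage outside the clamp-diode
        # window both count as a failing pin.
        results[pin] = v is not None and lo_limit <= v <= hi_limit
    return results


# Quick sanity check with simulated measurements:
simulated = {"VDD": -0.65, "OUT1": -0.70, "OUT2": -0.05}  # OUT2 looks open
results = continuity_test(simulated.get, ["VDD", "OUT1", "OUT2"])
print(results)  # OUT2 fails: -0.05 V is outside the clamp-diode window
```

Because only the thin `measure_v` adapter changes per platform, the module itself can move unchanged from one tester environment to the next.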
- Clean up your mess before you progress – test engineers understand this one… undo any hardware connections and reset your instrumentation before you run that Test statement. Why? Most testers will respond with a run-time error if you try to program a piece of hardware that’s been disabled because of a failing test. This may seem like basic stuff, but I would argue that the majority of run-time errors in test programs occur because a test site has been disabled and the code is still trying to program hardware that’s been shut down. The software engineering analogy: tear down your data structures and free any memory before moving on, or spend hours locating memory leaks.
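The same discipline can be baked into the code’s structure. Here is a minimal Python sketch (the connect/measure/disconnect callables are hypothetical stand-ins for your platform’s relay and instrument calls): the `finally` block guarantees the hardware gets restored even when the measurement fails or raises.

```python
def run_test(connect, measure, disconnect):
    """Run one test, guaranteeing hardware teardown.

    connect/disconnect stand in for relay closures, instrument setup,
    and resets. The 'finally' clause runs disconnect() even if
    measure() throws, so the next test never tries to program
    hardware that was left in a bad state by a failure.
    """
    connect()
    try:
        return measure()
    finally:
        disconnect()


# Demonstration: teardown happens even when the measurement blows up.
log = []
try:
    run_test(lambda: log.append("connect"),
             lambda: 1 / 0,                      # simulated run-time failure
             lambda: log.append("disconnect"))
except ZeroDivisionError:
    pass
print(log)  # ['connect', 'disconnect'] -- cleanup still ran
```

Structuring every test this way means a failing measurement can never leave relays closed or instruments mis-programmed for whatever runs next.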
- Revision control will keep you whole – CVS, SVN, Tortoise, ClearCase, whatever… just use it! Sure, having 10 terabytes of copies of your test program may be a fun way to torture your IT guy, but it can be a nightmare trying to find the one working version of the many obscure test patterns buried inside a full-blown test suite. Revision control tools let you quickly check in or check out code, perform “diffs”, restore previous versions… the list goes on and on. Tag that one working pattern when you check it in, then peruse the logs when you have to find it again. Your latest changes broke a test? Do an automated “diff” against the last checked-in copy and quickly locate the problem. Several free revision control tools are available. You can run the software directly on the tester, or get the benefit of a quick backup by hosting the server on a different computer.
- Ignoring an alarm may cause harm – software engineers love compiler errors and warnings. Why? Because they are much easier to debug and clean up than run-time errors. If your test environment allows it, set the compiler error and warning level as high as possible, and don’t dismiss the output until you understand it. Likewise with instrument warnings and alarms – tester manufacturers put these warnings in to tell you something is misbehaving. You can almost always change the sequencing of your test to get rid of alarms. Disabling an alarm should be the very last recourse, done only when you truly understand its nature and cause.
- Use the tools to learn the rules – as in the software development world, most test environments now provide Integrated Development Environments (IDEs) for running code, setting break traps, viewing the state of variables, etc. Study up and learn the operation and features of these IDEs… you’ll make up the time many times over while debugging. Additionally, test engineers have an abundant supply of debug tools at their disposal. Many of these will show connections, programmed/measured values, and settings/options, and some even provide code examples. I frequently code up as much of a test as I can figure out from examples and manuals, then set a trap in the code and use the tools to finish the job. Use the tools to verify everything is connected, check that ranges and setup values are correct, confirm initialization has been done, trigger measurements, and more. If you can do it with the GUI tool, you can write the code to do it programmatically.
- Test before release or recall the beast – AKA, take the time to test your test program. Software engineers use regression testing to make sure recent changes don’t break proven, working code. Test engineers can do the same by spending some up-front time verifying the output (datalog) from a known working solution, then comparing the output of later revisions against that proven baseline. Many “minor limit changes” have resulted in major product recalls that would easily have been caught by comparing the output to a known-good baseline. See my previous blog, “Testing your Test Software: Regression Testing”, for more on this topic.
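A minimal sketch of the comparison step in Python – diff a new datalog against the known-good baseline while skipping header lines that legitimately change every run. The volatile prefixes and datalog lines below are made-up examples; adapt them to your datalog format:

```python
# Assumed header fields that differ on every run and shouldn't flag a diff.
VOLATILE = ("Date:", "Time:", "Lot:")

def datalog_regression(baseline, candidate):
    """Return a list of (line_no, baseline_line, candidate_line) mismatches."""
    diffs = []
    for n, (old, new) in enumerate(zip(baseline, candidate), start=1):
        if old.startswith(VOLATILE):
            continue                     # timestamps etc. are expected to differ
        if old != new:
            diffs.append((n, old, new))
    if len(baseline) != len(candidate):
        # A length mismatch usually means tests were added, removed, or skipped.
        diffs.append((0, "<%d lines>" % len(baseline),
                         "<%d lines>" % len(candidate)))
    return diffs


baseline = ["Date: 2010-01-01", "Idd   12.3 mA  PASS", "Vout  1.80 V   PASS"]
candidate = ["Date: 2024-06-01", "Idd   12.3 mA  PASS", "Vout  1.95 V   FAIL"]
for diff in datalog_regression(baseline, candidate):
    print(diff)  # only line 3 is flagged; the date change is ignored
```

Run the same script after every “minor” edit and an unexpected limit or result change surfaces immediately, before the program ever ships to production.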
The last item, testing your test software, is perhaps the most important to ensuring a quality test program. There are numerous potential errors and pitfalls specific to ATE test software that should be checked – for example, the presence of default and error binning, valid hard and soft binning, error handling, and many more. Test Spectrum has developed a new product called “CodeReport” that is designed to help test engineers identify and correct these types of errors and verify the quality of their test programs. With CodeReport, you can screen your test program against a growing list of “known issues”, write your own custom quality rules with the graphical Rule editor, or view the tester resource requirements and code statistics with a click of the mouse. Many other features are included that will help test engineers consistently produce high-quality ATE test programs. Follow this link to learn more about CodeReport.