
Increase Device Quality by Improving Your Production Testing

There was a time when a green light on an in-circuit test machine and a thumbs-up from the operator were sufficient to ship a product, but those days have passed. Test rigor, precision, stability, and documentation—now a priority for most manufacturers—stand only to increase in importance.

The growth of premium-quality electronics is raising consumer expectations across the market and increasing the challenges faced by test engineering.

To be competitive, devices must meet significantly higher quality standards, offer more functionality, and still be available at an accessible price point. For example, when GN Audio (Jabra) created a new market category with the launch of its miniaturized wireless earbuds, it had to match the audio quality expectations set by the larger products in its line. High-definition music, reliable wireless, and extended battery life were once product differentiators; now they are table stakes. These higher standards put pressure on test engineering to maintain product quality while the complexity and precision of test stations increase.

The headphone story is one we can all relate to as consumers of these or similar products: the more we are asked to pay, the higher our expectations of functionality, hardware reliability, software stability, and overall design. Design and test engineering functions are both critical to overall product quality:

  • Creating a functional and reliable product design requires creativity, an understanding of downstream processes, and failure data from both manufacturing and product returns.
  • Ensuring that every product is assembled per this design requires knowing what to test, how to test it, and trusting the accuracy of the measurements taken. 
Building Collaboration to Ensure Device Quality

At many companies, the walls are too high between test engineering and R&D. Projects still get thrown over them instead of passed collaboratively back and forth. If test engineering is not in the room when product decisions are being made, you have a problem. Close and early collaboration between test and design functions is the most impactful tool for improving product quality, as it safeguards the ability to perform comprehensive testing. 

Consider the medical device industry, where life-critical devices such as pacemakers prioritize product quality above all for good reason. Chris Robinson, who manages the global test team at Medtronic, is clear about this:

The role of data maps directly to patient safety; [if there is ever an anomaly or a problem in the field] we start failure-analysis immediately, and the first thing they look at is the production test data.

-Chris Robinson, Global Test Manager, Medtronic, USA

 

If you talk to test engineers at Medtronic, Boston Scientific, or Mindray, they use phrases such as “test advisor,” “test champion,” or “production consultant” to refer to test engineers who participate in product design meetings right from kickoff to ensure that a product can be tested correctly. "Design for Test" is not a new concept, but for it to be effective, the practice must be actively engaged by all stakeholders—not just in theoretical process charts and PowerPoint slides. Key areas for consideration during these meetings are the breadth of functions to be measured, appropriate test limits to ensure operation, and accessibility of connection points to the device during test. Investing time with design team management to reinforce the benefits they will feel from test input—such as easier design iteration and on-time product release—is beneficial; some test teams go as far as offering to help characterize and then verify designs (at least initially) to secure that seat at the table.

The R&D-test relationship should not stop once the test specification is documented. Test data must find its way back to product design. To demonstrate, let’s examine two case studies: 

  • S&C Electric is a leading manufacturer of high-quality switching, protection, and control products for electric power systems. Products are large, complex, and expensive, and are sold to utility companies with whom S&C has strong long-term relationships. A product failure could cause power grid outages or safety concerns. Because of the known customer set and long-term deployment, maintenance and repair engineering can provide datasets directly to R&D for design iterations.
  • Dyson is a leading manufacturer of high-quality home appliances. Products are smaller and cheaper compared to those from S&C, but priced at a premium within their market—a luxury they enjoy due to their brand-leading reputation for quality. Users don’t expect to ever meet a repair engineer; they expect perfect operation for a number of years, followed by product replacement. In cases such as this, R&D looks to production test to provide the data they need to identify potential causes of failure. What causes a drop in yield today may cause an unhappy customer tomorrow. 

Now ask yourself: Are you an industrial power switch or a vacuum cleaner? If you are closer to the second case, are you effectively providing test data back to your R&D organization? If you aren’t, expect that request soon, and expect it to include not just access to your database, but also searchable, meaningful test results, trends, and observations.  
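As a minimal sketch of turning raw pass/fail records into a trend R&D can act on (the record fields here are illustrative, not a standard schema), yield can be aggregated per production week:

```python
from collections import defaultdict

def weekly_yield(records):
    """Aggregate pass/fail test records into a per-week yield percentage.

    Each record is assumed to be a dict with 'week' (e.g. '2024-W07')
    and 'passed' (bool) keys -- field names are hypothetical.
    """
    totals = defaultdict(lambda: [0, 0])  # week -> [passed count, tested count]
    for r in records:
        totals[r["week"]][0] += int(r["passed"])
        totals[r["week"]][1] += 1
    return {week: 100.0 * p / n for week, (p, n) in sorted(totals.items())}

records = [
    {"week": "2024-W06", "passed": True},
    {"week": "2024-W06", "passed": False},
    {"week": "2024-W07", "passed": True},
    {"week": "2024-W07", "passed": True},
]
print(weekly_yield(records))  # {'2024-W06': 50.0, '2024-W07': 100.0}
```

A dip in a series like this is exactly the kind of observation worth flagging to design before it becomes a field return.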

This need for test data from across organizations is helping drive digital transformation initiatives. As companies move beyond buzzwords like Internet of Things and Industry 4.0 to implement meaningful connectivity in their test stations, they gain better insights into product manufacturing and improve quality.

Figure 1. Digital test-station transformation can start today with data and systems management.

It can take time to put in place the IT infrastructure needed to take full advantage of digital transformation, but there are things you can do now to see rewards:

  • Use a data format that makes sources from across the company comparable.
  • Record contextual metadata alongside your test data so when a failure happens, you know the operator, the machines/instruments used in both assembly and test, and even environmental data around manufacturing conditions.  
  • Look into tools such as NI SystemLink™ software to achieve standardized connectivity between test stations and databases to visualize and report on trends. 
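The metadata point above can be sketched in a few lines. This example bundles a measurement with the context needed for later failure analysis; the field names and helper are hypothetical, not part of any NI schema:

```python
import json
import platform
from datetime import datetime, timezone

def build_test_record(dut_serial, measurement_name, value, limits,
                      operator, instrument_ids):
    """Bundle one measurement with contextual metadata for failure analysis.
    All field names are illustrative, not a standard."""
    low, high = limits
    return {
        "dut_serial": dut_serial,
        "measurement": measurement_name,
        "value": value,
        "limits": {"low": low, "high": high},
        "passed": low <= value <= high,
        "operator": operator,
        "instruments": instrument_ids,      # assembly and test equipment IDs
        "station_host": platform.node(),    # which station ran this test
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = build_test_record("SN-1042", "rail_3v3", 3.28, (3.20, 3.40),
                           operator="op-17", instrument_ids=["DMM-03", "PSU-11"])
print(json.dumps(record, indent=2))
```

When a failure surfaces months later, a record like this answers "who, on which machine, under what conditions" without archaeology.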
Identify the Root Cause of Untrustworthy Test Data

To highlight the problem of accurately and reliably measuring adherence to specifications every time, consider the story of a large electrical machinery manufacturer. Their functional test group performs end-of-line tests on enclosed electrical control units. The PCBAs were manufactured at a different plant and shipped in for final assembly. When tested 10 times, the same device could pass six times and fail four. Other teams questioned the validity of the test data, which soon escalated and brought unwanted attention to test from higher in the company. To restore confidence in test quality, the engineers had to troubleshoot, fast.

Measurement error such as this can stem from anywhere along the signal path. Noise introduced in fixture wiring or a badly designed routing board within the fixture is hard to pin down without complete disassembly. Then you have to consider your switching architecture, your mass interconnect, the thermal conditions of your cabling (thermocouple effects), and the list goes on. But for this example, let’s think through two common causes: measurement accuracy and software bugs.

Measurement Accuracy

Despite the existence of technology to solve most measurement accuracy issues, errors still abound. They are often caused by misunderstood instrument specifications, confusing documentation, or over-compromising to meet budget constraints. For example, two similar-looking 16-bit ADC voltage input cards can have significantly different absolute accuracy. New-to-market low-cost data acquisition options, with their varying spec-sheet detail, represent a risk for unsuspecting manufacturers, who may start seeing false positives. Two best practices can help:

  • Team up purchasing and technical functions to discuss choices, rather than relying on emails or documents in which important details can be missed.
  • Partner with an established instrument vendor with a track record in quality analog front-end design and well-published specifications detailing how accuracy is calculated.
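To see why two similar-looking cards can differ so much, it helps to work the numbers. Many vendor spec sheets (NI among them) express absolute accuracy as a gain-error term on the reading plus an offset-error term on the range plus a noise term; the coefficient values below are illustrative, not taken from any real datasheet:

```python
def absolute_accuracy(reading, range_v, gain_error_ppm, offset_error_ppm,
                      noise_uncertainty_v=0.0):
    """Absolute accuracy at a given reading, using the common spec-sheet form:
        accuracy = reading * gain_error + range * offset_error + noise
    Coefficients are hypothetical examples."""
    return (reading * gain_error_ppm * 1e-6
            + range_v * offset_error_ppm * 1e-6
            + noise_uncertainty_v)

# Two hypothetical 16-bit cards, both measuring 5 V on a 10 V range:
card_a = absolute_accuracy(5.0, 10.0, gain_error_ppm=75, offset_error_ppm=20)
card_b = absolute_accuracy(5.0, 10.0, gain_error_ppm=1000, offset_error_ppm=500)
print(f"card A: +/-{card_a * 1e3:.3f} mV, card B: +/-{card_b * 1e3:.3f} mV")
# card A: +/-0.575 mV, card B: +/-10.000 mV
```

Same resolution, roughly 17x difference in worst-case error at this reading, which is exactly the gap that lets marginal units pass on one card and fail on another.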

Timing accuracy is often overlooked. Triggering and synchronization align DUT stimulus, response, and measurement. Without accuracy here, you can never be confident in the relationship between cause and effect. Consolidating as many instruments as possible into a form factor such as PXI, which shares timing signals across a chassis backplane rather than through external wires, can reduce timing errors.

Figure 2. Timing accuracy can be addressed using instrumentation connected over a chassis backplane.  

Software Bugs

Often, an error exists in the software analysis, not in the physical signal. The path to error-free code is paved with investment in proficiency, standard practice, and extensive testing, while the most common routes to software issues are haste, inadequate training, and lack of reuse. Reuse, or standardization, is the biggest contributor to quality, as the increased return on investment allows more time to be invested in each piece of code.

Figure 3. Solution diagram highlighting that only a new XML configuration file and DUT fixturing must be developed for new product designs.

Neil Evans, Test Manager at Philips, champions a loosely coupled, modular architecture when discussing the work of his team:

A software architecture with minimal dependencies between code modules was developed, meaning that each function can operate in the same fashion independent of the context within which it is being run. This allows for significant code reuse of well-written, verified code, which improves time-to-market, code quality, and the number of regulatory recertifications.

-Neil Evans, Senior Manager, Philips, USA

Read this story of standardization success within production test at Philips Ultrasound.
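The pattern behind Figure 3 and the Philips quote can be sketched briefly: a generic test engine reads a per-product configuration file, so a new product design needs only a new config (and fixturing), not new code. This is a minimal illustration, not Philips' actual architecture; the XML schema and stubbed readings are invented for the example:

```python
import xml.etree.ElementTree as ET

# Hypothetical per-product configuration: only this file (and the DUT
# fixture) would change between product designs.
PRODUCT_CONFIG = """
<product name="widget-a">
  <test name="rail_3v3" low="3.20" high="3.40"/>
  <test name="rail_5v0" low="4.90" high="5.10"/>
</product>
"""

def run_sequence(config_xml, measure):
    """Run every test declared in the config through a generic,
    context-independent measurement callable."""
    results = {}
    for test in ET.fromstring(config_xml).iter("test"):
        name = test.get("name")
        value = measure(name)
        low, high = float(test.get("low")), float(test.get("high"))
        results[name] = low <= value <= high
    return results

# Stubbed instrument readings stand in for real driver calls.
readings = {"rail_3v3": 3.31, "rail_5v0": 5.22}
print(run_sequence(PRODUCT_CONFIG, readings.__getitem__))
# {'rail_3v3': True, 'rail_5v0': False}
```

Because the engine never references a specific product, the well-tested core is reused verbatim, which is where the time-to-market and recertification gains come from.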

We Are Here to Help

There’s no silver bullet for ensuring product quality. However, a combination of thoughtfully designed instrumentation, carefully built processes, and collaborative data sharing gets you most of the way there. Test is not a solo activity; it’s a team effort. Treat your integration partners and product vendors as experts in their fields, and expect a higher level of support and service.

NI is committed to supporting you on this journey to improve your test stations, test data, test strategy, and, overall, your product quality. You don’t have to navigate the treacherous waters of test alone. Let’s talk about how increased DUT quality is causing you test challenges and how NI can help.  

Next Step: