Verification and Validation (V&V) engineers build many types of testers. With more and more devices designed with built-in, software-controlled intelligence, product complexity grows beyond intricate mechanical design. Some teams must rapidly change or update their test systems to keep up with that complexity, while others find they must support a mix of legacy and new testers. Tester types range from simple benches to sophisticated hardware-in-the-loop (HIL) systems. What every type of system has in common is that it must acquire and log data that tells us how the product is performing.
Product complexity increases because technology advancements are a competitive advantage, and companies benefit most when they get to market faster than their competitors. That race adds pressure on V&V engineers, who feel the time constraints when designing a new tester, including developing new test software and routines. So it is understandable why engineers feel rushed. But when time is scarce, we end up accepting more risk at the expense of quality (unless we are in a regulated market). Instead, we should think about how to gain efficiency by standardizing how we build, manage, and maintain our test systems. That path to speed avoids sacrificing quality.
When it comes to increasing the efficiency of test systems, we need to start with what we are doing today. Because V&V test teams often need to test product variations or completely different products, they must be ready to build new testers that can handle a variety of requirements and specifications. Newer products might be built with software that controls the device itself and, in true Industrial Internet of Things (IIoT) style, gathers and publishes information to a network. Older products might combine electrical and mechanical features, or they may be purely mechanical without any electrical components.
Ultimately, it doesn’t matter whether your devices are cutting-edge, using state-of-the-art technology, or built on tried-and-true electrical and/or mechanical engineering designs. All products need to be verified and validated before being released to manufacturing. The time available to perform V&V test depends on several factors. Is there a market window to meet? Did the design teams slip their schedule? Maybe V&V tests identified design issues that caused board redesigns and respins, pushing the timeline back months. All of these pressures for shorter deadlines force teams to accept more risk at the expense of quality, skipping test steps that were originally in the test plan.
The speed of innovation must—and does—continue to accelerate. But the resources available to test teams remain the same, or even decline. This is, unfortunately, nothing new, and it continues to be one of the biggest reasons we end up being reactive, scrambling to build testers.
To become proactive instead, we must look at how our systems are built, managed, and maintained. One of the biggest signs of a reactive test group is a large number of “custom” test systems and rigs. In other words, equipment is not being reused or repurposed between setups because doing so is complex and time-consuming. Another sign is testers that have been “built multiple times.” When your test engineers have a broad range of skills, they may complement each other well, but sometimes the person who originally built something has moved on, and it is faster to simply rebuild the tester than to spend time understanding its current state.
While you may be used to situations like these in your organization, they present opportunities to standardize. When you standardize, you can get one step closer to being proactive.
From standing up new testers to maintaining a fleet, the benefits of a standard approach are numerous.
At a high level, a standard approach can drive efficiency in your teams. But of course, there are other things to consider. As you might expect, the considerations fall into two categories: hardware and software. Let’s take a look at how to get started and what we need to consider to implement test system standardization successfully.
As you look across all of your testers, you might find that while standardization is needed to drive efficiency, you have many different systems to consider.
From a hardware point of view, flexibility, scalability, and your overall platform approach are important factors in test system efficiency.
V&V tests fall on a spectrum of complexity. On one end, you might need an open-loop control and measurement system in which the user controls the test, starting and stopping it as needed. On the other end, you might need a highly complex, automated system for HIL and embedded software test.
To realize efficiency gains, consider the needs along the entire spectrum. The approach you choose should accommodate both ends, which becomes even more important as new technologies are developed and spawn the need for new types of measurement.
A standard approach must have the flexibility to incorporate the measurement types you use today and to continuously add new capabilities as technology advances. This ensures the highest system readiness, and your V&V test teams won’t have to learn a new approach every time the technology changes. In turn, this increases efficiency and removes the obstacles associated with integrating new and different equipment.
It is simply impossible to know what product features there will be five, let alone ten, years from now. Yet we often expect our equipment to last that long. This makes scalability extremely important, especially for V&V test teams.
Not only do we use some of the most expensive equipment, it can also be complex to integrate. Even a seemingly simple change, such as expanding a test bench to test eight devices instead of two, can be challenging if you have not selected a platform that scales easily and allows quick integration and synchronization of additional channels.
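As one illustration, here is a minimal sketch using the nidaqmx Python package (the device name "Dev1" and the channel counts are assumptions): because all channels are added to a single task that shares one sample clock, growing the bench from two synchronized channels to eight is a one-line configuration change rather than a redesign.

```python
# Minimal sketch (assumes the nidaqmx Python package and an NI DAQ device
# named "Dev1"). The channel string is the only line that changes when the
# bench grows from two devices to eight.
import nidaqmx
from nidaqmx.constants import AcquisitionType

CHANNELS = "Dev1/ai0:7"      # was "Dev1/ai0:1" on the two-device bench
SAMPLE_RATE_HZ = 1000
SAMPLES_PER_CHANNEL = 500

with nidaqmx.Task() as task:
    # All channels live in one task, so they share a sample clock
    # and are acquired synchronously.
    task.ai_channels.add_ai_voltage_chan(CHANNELS)
    task.timing.cfg_samp_clk_timing(
        rate=SAMPLE_RATE_HZ,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=SAMPLES_PER_CHANNEL,
    )
    # Returns one list of samples per channel, ready to log.
    data = task.read(number_of_samples_per_channel=SAMPLES_PER_CHANNEL)
```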
In V&V, scalability must be considered when choosing the platform and approach on which you will build your systems. The right choice drives efficiency by letting you adapt, integrate, and expand your systems with more channels quickly. Make this an important part of your strategy so you can save on cost initially while staying confident that you can scale your systems when you need to in the long run.
We’ve discussed how standardization makes it easier to repurpose and reuse, but sparing is also important. When timelines are compressed, the last thing you want is to discover that a device needs calibration or doesn’t work as intended a week, or minutes, before a test. Now you must scramble to get a new board, and the test might be pushed out. When you standardize on a hardware approach and platform, you need to find the right balance between cost and your sparing strategy to ensure uptime. Ultimately, this means the platform you choose must offer a broad array of equipment and capabilities with high quality and accuracy.
It is not always possible to find an approach and platform that offers every capability you need across all of your tests. When that is the case, consider whether you have synchronization, timing, and throughput requirements, and determine how easy, or complex, it is to integrate different types of equipment into one system. The more open a platform is, the more likely it is to be flexible enough to integrate other equipment without hurting efficiency or your timeline.
The biggest risk when standardizing is picking an approach or platform that can’t scale to support you in the future. Ultimately, the strategy you pick needs to be right for your team, and it is often beneficial to do a thorough assessment before starting your standardization journey.
As with hardware, the complexity of the test also matters when you standardize on a software approach. At the low end of the spectrum, you might find that turnkey software that controls outputs and measures inputs on your hardware is exactly what you need. This type of software lets test engineers focus on implementing the right hardware and standing up the test bench faster. At some point on the scale, however, tests become complex enough that engineers have to build an automated tester that can run through test routines with different internal dependencies. That takes a lot more time.
When test engineers are testing complex products that must run through many different scenarios, they turn to building automated systems simply because it takes too long to run all the tests manually. Writing test programs and routines then becomes an integral part of their job. As you consider your team’s skillset, you’ll likely find various levels of proficiency across different types of software and programming languages. However, as mentioned earlier, without a common approach, managing and maintaining your test systems becomes difficult or impossible over time.
When you start considering your software approach, you will likely hear the word “framework” again and again. When you have a framework that is open and built with extension in mind, you eliminate the need for everyone on your team to rebuild the same foundational components every time a new tester is built. We often forget that we rebuild these foundational components for each tester because they are “hidden” within the software. Yet this is where the real efficiency gains on the software side come from.
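To make the idea concrete, here is a purely illustrative sketch in Python (the names TestStep, TestRunner, and PowerOnCheck are hypothetical): the framework implements the foundational components, sequencing, logging, and pass/fail reporting, one time, and each new tester only contributes its own steps.

```python
# Illustrative sketch of an extensible test framework (hypothetical names).
# The runner owns the shared pieces -- sequencing, logging, and reporting --
# so building a new tester means writing new steps, not a new foundation.
import abc
import csv
import time


class TestStep(abc.ABC):
    """Base class every test step extends; the framework itself never changes."""

    name: str = "unnamed step"

    @abc.abstractmethod
    def run(self) -> bool:
        """Execute the step and return True on pass, False on fail."""


class TestRunner:
    """Shared sequencing, logging, and pass/fail reporting."""

    def __init__(self, steps: list[TestStep], log_path: str = "results.csv"):
        self.steps = steps
        self.log_path = log_path

    def run_all(self) -> bool:
        all_passed = True
        with open(self.log_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "step", "result"])
            for step in self.steps:
                passed = step.run()
                writer.writerow([time.time(), step.name, "PASS" if passed else "FAIL"])
                all_passed = all_passed and passed
        return all_passed


class PowerOnCheck(TestStep):
    """Example step one tester adds; other testers add their own."""

    name = "power-on check"

    def run(self) -> bool:
        measured_voltage = 4.98  # placeholder; replace with a real instrument read
        return 4.75 <= measured_voltage <= 5.25


if __name__ == "__main__":
    runner = TestRunner([PowerOnCheck()])
    print("Sequence passed:", runner.run_all())
```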
At the low end of the spectrum, your software approach should support quick sensor configuration and data logging without requiring any code. When development is needed, you want an open approach that gives your engineers a framework. Test engineers need to be able to develop, debug, and deploy tests faster, integrating test code written in multiple languages directly into the framework. At this point you are likely automating your tests, so also consider how easy it is to integrate different types of hardware so you don’t end up with a framework that limits your ability to use the hardware you need.
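As for integrating code written in other languages, one hypothetical pattern (the executable name below is an assumption) is to wrap an externally compiled routine and treat its exit code as the pass/fail result; a wrapper like this can then be registered as a step in a framework like the one sketched above.

```python
# Hypothetical sketch: running a test routine written in another language
# (here, an external compiled program whose path is assumed) from Python
# and treating exit code 0 as a pass.
import subprocess


def run_external_test(executable: str = "./vibration_test") -> bool:
    """Run the external routine and report pass/fail from its exit code."""
    result = subprocess.run([executable], capture_output=True, text=True)
    if result.stdout:
        print(result.stdout, end="")  # surface the routine's own log output
    return result.returncode == 0
```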
Overall, standardization is about creating efficiency gains and reducing risk. Everything on the software and hardware sides must work together and integrate seamlessly. You won’t get the efficiency gains you need if you pick hardware whose drivers don’t support the software approach you chose. Likewise, if your software approach isn’t open enough and requires rework across languages, such as creating wrappers just to execute code, it will be too complicated to maintain and manage when you add new instruments. Ultimately, find the best approach to both, and standardize on software and hardware platforms that integrate easily and scale.
There is more to standardization than just systems. Processes and data are also important. When you are standardizing, the right data can give you valuable insights about not only your product performance but your entire test environment. When you view that data in the context of your processes, you can identify where you have bottlenecks today, which workflows can be improved, and which manual steps you can automate.
As we have discussed, standardizing how you build, manage, and maintain systems can deliver many efficiency gains. NI’s software and hardware solutions have been developed with high quality, openness, scalability, and flexibility in mind, so even if you thought you would have to compromise on your software or hardware approach, you can instead use the foundation you have and build a framework and standard approach in even less time. Building on each team member’s skills and proficiency and tying them together in a common framework accelerates productivity and makes implementing a standard approach more seamless.