Transforming Test with AI: Embracing Innovation and Managing Risk

Published 05.14.2024

A 2022 NI survey of 300 test professionals in the U.S. highlighted a pervasive challenge. Most organizations were struggling with their most important metrics: product quality, speed of test, and time to market. A striking 57 percent of respondents feared their production processes were outdated and couldn’t keep up with new business and technology trends. 

As product complexity increases, these challenges will only intensify. Applying AI technologies to test is a promising approach to shorten time to revenue, reduce test engineering costs, and improve measurement data analysis. While AI offers the promise of significant improvements in development and test processes, it also comes with new challenges and risks. 

Many test leaders are uneasy about the accuracy of results as well as the privacy and security of their data, and these are legitimate concerns. However, apprehension should not block progress and innovation. Security protocols and process controls can address and mitigate risks surrounding AI. The risk of moving too slowly on AI is far greater and could be a barrier to business growth. 

Ultimately, AI adoption is inevitable. Today’s cutting edge is tomorrow’s status quo. Nevertheless, it’s critical to be deliberate and cautious when plotting your way forward.  

The Case for AI  

Much like the graphical user interface (GUI) and PC revolutionized test and measurement many decades ago, applying AI technology to test can be transformative and drive a competitive advantage by delivering productivity gains, deeper analysis and insights, and incremental quality improvements. 

Productivity Gains

As resources and budgets are squeezed, test leaders are looking for ways to improve productivity and drive speed without compromising quality. AI-accelerated test automation and AI data analysis are powerful levers for streamlining operations, improving resource utilization, and broadening test coverage. 

AI technologies accelerate test engineering workflows by eliminating tedious manual tasks. Generative AI can translate requirements into tests, search existing code for reuse, and identify devices or instruments suited to the test system requirements. AI data analytics correlate and automate the interpretation of vast data sets, identifying patterns and anomalies that would take humans much longer to uncover.

Speed is crucial for accelerating time to market and boosting productivity. Instead of spending time performing mundane tasks and crunching data, test engineers can focus on problem solving and higher-value activities.  

Deeper Insights

Beyond accelerating time to insight, machine learning algorithms continuously learn from new data to produce deeper insights. AI data analysis can break down data silos and rapidly sort through big data from design, validation, and production to illuminate unexpected findings, produce predictive analytics, improve collaboration, and inform advanced decision making.

Advanced data analytics with digital transformation and AI tools have enabled deeper insights at Jaguar Land Rover. The automaker estimates that it now analyzes 95 percent of test data, up from 10 percent with legacy systems. Test data is leveraged beyond the initial test station, and metadata analysis is used to reduce test costs, improve traceability, and accelerate root cause analysis.

The evolution of generative AI and analytics has the potential to deliver even more profound analyses, thanks to its ability to interpret unstructured data, generate new content, and simulate scenarios. For example, it will enhance the way synthetic data and digital twins are created to provide more advanced modeling and insights.

Quality Enhancements

Products are more complex than ever, with advanced features, electronics, and embedded circuits. Leveraging AI data analytics to compare validation and production data improves precision, expands test coverage, and enables earlier detection of potential issues. 

Test and measurement data is especially critical for products with embedded AI, like smart devices and autonomous driving (AD) systems. When a product's behavior is trained from data rather than explicitly programmed, testing processes must be trustworthy to ensure product reliability and consistency. AI helps provide the necessary coverage and systematic data.

Overall, when manual work is reduced, operations are streamlined, and data is leveraged across functional groups, substantial quality improvements are unlocked. And nothing sharpens a competitive edge like a boost in product quality and reliability.

Distinguishing Between AI Technologies 

There are many types of AI, but with the recent evolution of generative AI applications, it's useful to distinguish between traditional AI and generative AI.

Traditional AI

Traditional AI models are trained on historical data and continue to evolve and improve as new data is acquired and applied to their machine learning algorithms. Traditional models power tools such as email spam filters, fraud detection, social media advertising, and e-commerce recommendations.

Traditional AI entered industrial use through vision and image processing tasks in quality control inspections, for items ranging from semiconductor wafers to consumer products. As the technology has advanced, its applications have broadened to encompass predictive maintenance, real-time monitoring, and anomaly detection.

In the context of test systems, traditional AI excels at complex calculations and classifying large amounts of test data. AI data analytics quickly detect patterns and anomalies, identify complex relationships and dependencies, and produce predictive analytics.
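
To make this concrete, here is a minimal sketch of what anomaly detection on historical test data can look like, using a general-purpose open-source library. The file name, measurement columns, and pass/fail field are hypothetical, and any flagged units would still warrant engineering review.

```python
# Minimal sketch: flagging anomalous test measurements with an off-the-shelf
# machine learning model. Column names and the data file are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Load historical test results (assumed schema: one row per unit under test).
results = pd.read_csv("production_test_results.csv")
features = results[["supply_current_ma", "rf_power_dbm", "temp_drift_ppm"]]

# Train an unsupervised model on what "typical" measurements look like.
model = IsolationForest(contamination=0.01, random_state=0)
results["anomaly"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

# Surface units that passed their limits but still look statistically unusual,
# an early-warning signal worth a closer look. ("passed" is assumed boolean.)
suspects = results[(results["anomaly"] == -1) & results["passed"]]
print(suspects[["serial_number", "supply_current_ma", "rf_power_dbm"]])
```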

User-friendly software with AI capabilities and test schemas allows test engineers to focus on their product—without having to be master programmers. Specialized test software can even integrate with programming language ecosystems, unlocking insights with advanced algorithms, statistical analysis, and simulations. 
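
As a simple illustration of that kind of integration, the sketch below uses general-purpose Python libraries to compute a process capability index (Cpk) for a single measurement; the data file, column name, and specification limits are placeholders.

```python
# Minimal sketch: a quick statistical health check on one measurement, the kind
# of analysis a test workflow can hand off to the Python ecosystem.
import pandas as pd

results = pd.read_csv("production_test_results.csv")  # hypothetical data set
voltage = results["output_voltage_v"].to_numpy()

lsl, usl = 3.267, 3.333          # placeholder lower/upper specification limits
mean, sigma = voltage.mean(), voltage.std(ddof=1)

# Process capability index: how comfortably the distribution fits the limits.
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"mean={mean:.4f} V, sigma={sigma:.4f} V, Cpk={cpk:.2f}")
```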

Generative AI

Generative AI applications include ChatGPT, Microsoft Copilot, and Midjourney. While previous AI technologies were limited to sorting and interpreting existing data, generative tools create new text, images, and videos. 

In a test environment, generative AI tools can turn written requirements into tests, find existing code suitable for reuse, and recommend devices or instruments that match the test system requirements. In data analytics, generative AI can correlate data from design tools with data from test and measurement tools to drive advanced analysis. Traditional AI tools can also be augmented with generative AI to improve data storytelling with generated insights.
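
As an illustration of the first use case, the sketch below asks a large language model to draft a test skeleton from a written requirement. It assumes an OpenAI-compatible API; the model name, requirement text, and prompt are placeholders, and any generated code must be reviewed by an engineer before use.

```python
# Minimal sketch: drafting a test skeleton from a requirement with a large
# language model. Assumes an OpenAI-compatible API; the prompt, model name,
# and requirement are placeholders, and the output is a draft for human review.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

requirement = (
    "REQ-142: The device shall output 3.3 V +/- 1% on the main rail "
    "across an ambient temperature range of -20 C to 60 C."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write pytest skeletons for hardware test requirements. "
                    "Leave instrument I/O as TODO stubs."},
        {"role": "user", "content": requirement},
    ],
)

print(response.choices[0].message.content)  # draft test, pending engineer review
```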

While advances in generative AI are exciting, robust controls, policies, and human oversight are currently required because generative AI can produce errors, exhibit bias, and raise intellectual property concerns. Moreover, since generative AI tools use inputs to improve outputs, test engineers must be well versed in which plug-ins, extensions, and features to disable when working with sensitive data.

Foundations for Success with AI

Regardless of where an organization is in its AI journey, developing a data strategy, building test data sets, and establishing AI policies and controls are foundational for long-term success. Test leaders can take steps within the test organization to prepare for these broader organizational initiatives.

Develop a Data Strategy

Test managers should develop a data strategy and pinpoint goals to be achieved with test data. Laying out short-term and long-term objectives will help develop a roadmap and determine future resource needs. 

This groundwork will guide which metadata, instrument data, test measurements, and operational information must be aggregated and stored. It’s also critical to categorize sensitivity levels based on importance, confidentiality, and the potential impact of exposure or misuse.
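
For illustration only, the sketch below shows one way a test record might pair each measurement with the metadata and sensitivity label a data strategy calls for; the field names and categories are hypothetical and would follow from each organization's own strategy.

```python
# Minimal sketch: a test record that carries its metadata and sensitivity label.
# Field names and sensitivity categories are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"  # e.g., pre-release product data


@dataclass
class TestRecord:
    serial_number: str
    test_station: str
    test_name: str
    value: float
    unit: str
    limits: tuple[float, float]
    firmware_version: str
    timestamp_utc: str
    sensitivity: Sensitivity


record = TestRecord(
    serial_number="SN-000123",
    test_station="EOL-07",
    test_name="output_voltage",
    value=3.31,
    unit="V",
    limits=(3.267, 3.333),
    firmware_version="2.4.1",
    timestamp_utc="2024-05-14T09:30:00Z",
    sensitivity=Sensitivity.CONFIDENTIAL,
)
```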

A well-defined data strategy ensures the effectiveness, accuracy, and scalability of AI systems. It is also a prerequisite to gearing up for the Industrial Internet of Things (IIoT) and connected test systems. 

Build Up Data Sets 

AI works best when there are large data sets with reliable information. Test leaders should consider the characteristics and parameters important for product quality and reliability. This data should be stored and tagged now—even if the organization isn’t currently ready to analyze that data.

Test automation tools allow test organizations to build up data sets with minimal resource strain. Automated tests are conducted consistently, and test results are collected systematically.  
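
As a minimal illustration, the sketch below logs each automated test result as a tagged row in a growing data set; the file name, fields, and tags are hypothetical. The format matters less than capturing the measurements and their metadata consistently from day one.

```python
# Minimal sketch: appending every automated test result, with its tags, to a
# shared data set so it is ready for analysis later. Names are hypothetical.
import csv
from datetime import datetime, timezone
from pathlib import Path

DATASET = Path("test_measurements.csv")
FIELDS = ["timestamp_utc", "serial_number", "station", "test_name",
          "value", "unit", "low_limit", "high_limit", "passed", "batch"]


def log_result(serial_number, station, test_name, value, unit,
               low_limit, high_limit, batch):
    """Append one measurement, with its metadata tags, to the data set."""
    row = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "serial_number": serial_number,
        "station": station,
        "test_name": test_name,
        "value": value,
        "unit": unit,
        "low_limit": low_limit,
        "high_limit": high_limit,
        "passed": low_limit <= value <= high_limit,
        "batch": batch,
    }
    write_header = not DATASET.exists()
    with DATASET.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)


# Example: called automatically at the end of each test step.
log_result("SN-000123", "EOL-07", "output_voltage", 3.31, "V", 3.267, 3.333, "B0424")
```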

Establish AI Policies and Process Controls

If the organization is not already developing an AI policy, this initiative is worth highlighting to upper management. Cutting-edge companies are establishing AI policies and roles to mitigate the risks associated with AI. Corporate governance, rules, and process controls help ensure employees use AI safely, ethically, and responsibly.

In the meantime, test leaders should implement security protocols and process controls within the test organization to protect sensitive data and ensure outputs are always verified. This is especially critical when using generative AI tools.

Join the Conversation Around Test and AI

AI is a reality to embrace. However, it’s critical to balance the risks and rewards of these new technologies, especially when it comes to sensitive and proprietary data. NI is your trusted partner in test and measurement, bringing the latest AI technology—safely and responsibly.  

Stay tuned in the coming months as NI launches a series of articles about preparing for your AI journey to drive productivity and deeper insights.