6G and AI: Rethinking Test in Next-Generation Wireless Systems

BUSINESS INSIGHT

WIRELESS / 6G | 5 MINUTE READ

Explore how 6G and AI are reshaping wireless networks. Learn about key challenges, AI integration, and advanced testing for next-gen systems.

2025-02-04

It may feel like 5G isn’t old enough to need replacing, but research into 6G is already well underway, with an industry goal of rolling out the first 6G networks in 2030.


During the 5G research phase, three use cases were defined (enhanced mobile broadband, ultra-reliable low-latency communication, and massive machine-type communication), along with KPIs for each. No similar set of industry-aligned goals or KPIs exists yet for 6G, and each company defines 6G slightly differently, though with clear commonalities. While every company emphasizes certain technologies, one is common across the industry: artificial intelligence and machine learning (AI/ML).

Figure 1: NI’s 6G Pillars

AI in Wireless

The general public is familiar with AI primarily through large language models (LLMs) like ChatGPT, but AI encompasses many other types of machine learning (ML) models as well. In simple terms, training an ML model means taking a large set of data, the training data, and producing a model that maps given inputs to expected outputs. After the model has been trained, the goal is to be able to feed any new input into the model and get an output consistent with the data it was trained on. AI is a powerful tool because it can solve complex problems that can't easily be captured in a mathematical model by a human or by traditional computing methods.
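To make the train-then-infer idea concrete, here is a minimal toy sketch. It is hypothetical and deliberately simple (a real wireless model would be far more complex): fit parameters that map inputs to outputs, then apply them to unseen data.

```python
# Toy illustration of "training" (fitting a mapping from inputs to outputs)
# and "inference" (applying the fitted mapping to new data).
import numpy as np

rng = np.random.default_rng(seed=0)

# Training data: inputs X and the outputs y they should map to.
X = rng.normal(size=(1000, 4))                       # 1,000 examples, 4 features
true_weights = np.array([0.5, -1.2, 2.0, 0.3])       # unknown in practice
y = X @ true_weights + 0.1 * rng.normal(size=1000)   # noisy observed outputs

# "Training": find parameters that best map the inputs to the outputs.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Inference": feed a new, unseen input through the trained model.
x_new = rng.normal(size=4)
print(f"predicted output: {x_new @ weights:.3f}")
```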


The hope and promise of AI rest on its potential to solve problems more efficiently than traditional methods, using ML models to save resources and optimize highly complex systems. Wireless networks are inherently complex systems, with many elements that could be optimized with AI.


3GPP, the consortium that defines cellular standards, has been exploring AI's potential in three key areas: channel estimation (channel state information), beamforming, and positioning. These applications are computationally intensive and could see significant efficiency gains with the adoption of AI.


Several important things need to happen before an ML model can be adopted into a commercial product. First, the model must be trained. Unlike the internet-scale data used to train LLMs, copious, freely available data doesn't exist for wireless applications. Network operators have data based on the activity on their networks, but they are unlikely to share it broadly (and certainly not for free) for a handful of reasons, including privacy concerns.


Data can be gathered through software-defined radios and test equipment, or generated synthetically using simulations. Either way, producing these data sets is a time-consuming process, and no industry standards yet exist for how the data sets are labeled, making it challenging to use data across vendors. While challenges around data sets remain, researchers are working through them. For example, NI and Northeastern University released the RF Data Recording API for generating training data.
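As an illustration of the synthetic route, the sketch below simulates labeled (received signal, true channel) pairs that a channel-estimation model could train on. Every detail here is a simplifying assumption (flat Rayleigh fading, QPSK pilots, a made-up record schema), and it is unrelated to the RF Data Recording API itself.

```python
# Hypothetical synthetic-data generator: simulate received pilot symbols
# through a random channel and record them alongside the ground-truth label.
import numpy as np

rng = np.random.default_rng(seed=1)

def generate_example(num_subcarriers=64, snr_db=10.0):
    # Known QPSK pilot symbols, one per subcarrier.
    bits = rng.integers(0, 4, size=num_subcarriers)
    pilots = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

    # Flat Rayleigh fading per subcarrier: the "label" a model should recover.
    channel = (rng.normal(size=num_subcarriers)
               + 1j * rng.normal(size=num_subcarriers)) / np.sqrt(2)

    # Received pilots with additive white Gaussian noise at the requested SNR.
    noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
    noise = noise_std * (rng.normal(size=num_subcarriers)
                         + 1j * rng.normal(size=num_subcarriers))
    received = channel * pilots + noise

    # One labeled training example: model inputs, ground truth, and metadata.
    return {"received": received, "pilots": pilots,
            "channel": channel, "snr_db": float(snr_db)}

# Sweep SNR so the data set covers a range of channel conditions.
dataset = [generate_example(snr_db=snr) for snr in rng.uniform(0, 20, size=500)]
```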


After a model is trained, it must be validated. The model itself needs to be tested, typically in software, to understand how it performs. The device the model is embedded in also needs to be tested with the AI running on it. NI has developed a solution using USRP Software Defined Radio Devices to benchmark the performance of ML models inside real-time, end-to-end wireless networks. This benchmarking helps engineers measure the performance of ML models and compare them with traditional methods to better understand the gains and tradeoffs of using AI.
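In the spirit of that comparison, a benchmarking harness can score any estimator against ground truth so an ML model and a traditional baseline are measured the same way. The sketch below is hypothetical and offline (it reuses the simulated data set from the previous sketch, and `trained_model` is a placeholder); NI's USRP-based solution performs this kind of comparison inside live, real-time networks.

```python
# Hypothetical offline benchmarking harness: score estimators on the same
# simulated data so an ML model can be compared with a traditional baseline.
import numpy as np

def ls_estimate(received, pilots):
    # Traditional baseline: per-subcarrier least-squares channel estimate.
    return received / pilots

def benchmark(estimator, dataset):
    # Mean squared error between the estimated and true channel,
    # averaged over every example in the data set.
    errors = [np.mean(np.abs(estimator(ex["received"], ex["pilots"])
                             - ex["channel"]) ** 2) for ex in dataset]
    return float(np.mean(errors))

baseline_mse = benchmark(ls_estimate, dataset)
# ml_mse = benchmark(trained_model.predict, dataset)  # plug in an ML model here
print(f"least-squares baseline MSE: {baseline_mse:.4f}")
```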


After an ML model is trained and integrated into a product, the final product must be tested. In a typical testing scenario, there are predefined inputs and outputs, along with a finite set of situations the device under test (DUT) is expected to encounter, and clear parameters are established to define correct and incorrect responses. With AI, however, the number of test points that need to be evaluated is nearly infinite. The DUT needs to be tested to identify corner cases and other failures caused by issues with the ML model, in addition to failures related to the DUT itself. Selecting the right set of test points is a new challenge for test engineers working with AI-embedded DUTs. They need a strategic approach to test design that balances coverage with practical limitations to ensure both the DUT and its integrated AI function reliably under real-world conditions.
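One way to think about that balancing act (purely illustrative; the parameters and ranges below are made up) is to cover the nominal operating space with a coarse grid, probe between the grid lines with randomized points, and add hand-picked corner cases at and beyond the edges of the expected conditions.

```python
# Hypothetical test-point selection: grid coverage plus randomized points
# and explicit corner cases, instead of enumerating an infinite input space.
import itertools
import random

random.seed(0)

snr_db = [0, 10, 20, 30]          # signal-to-noise ratios
doppler_hz = [0, 100, 500]        # mobility conditions
bandwidth_mhz = [20, 100, 400]    # channel bandwidths

# Coarse grid over nominal operating conditions.
grid_points = list(itertools.product(snr_db, doppler_hz, bandwidth_mhz))

# Randomized points between grid lines, where an ML model can misbehave
# even though each neighboring grid point passes.
random_points = [(random.uniform(0, 30), random.uniform(0, 500),
                  random.choice(bandwidth_mhz)) for _ in range(50)]

# Hand-picked corner cases at and beyond the edges of expected conditions.
corner_cases = [(-5, 1000, 400), (35, 0, 20)]

test_plan = grid_points + random_points + corner_cases
print(f"{len(test_plan)} test points selected")
```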

Navigate Your AI Journey with NI

Upcoming 6G wireless technology and AI-embedded devices are set to revolutionize wireless networks. However, researchers face a new set of challenges in adopting and commercializing these AI-powered devices. Traditional testing methods cannot handle a nearly infinite test space, especially when time to market and quality are critical. To keep pace with emerging technologies, what we consider “advanced” testing and analysis today will soon become tomorrow's standard practice.


Now is the time to include AI in your technology roadmap. Organizations that begin investing in infrastructure and building their AI expertise today will be better positioned to scale and stay ahead as new technologies unfold.