-
- [White Papers] AI-powered RCA
- Table of contents
1. Introduction
2. What is AI Powered RCA?
3. Technology applied to the AI model
4. Learning and evaluation
5. XCAP-Cloud with AI-powered RCA
6. Use cases
7. Future directions
*Under R&D collaboration with Korean MNOs

Introduction
With the rapid development of wireless communication technology, ultra-high data rates and connections to an ever wider variety of devices are making the communication environment more diverse and complex. Accurate and rapid responses to the communication system failures that result from this diversity and complexity are essential. To address these market demands, we provide automated wireless network optimization testing solutions as well as logic-based RCA solutions that identify the causes of the various defects occurring in mobile communication networks and suggest appropriate remedies.

Our logic-based RCA solution uses wireless protocol transmission/reception information and terminal status information to identify and resolve problems through structured data and rule-based analysis. In today's advanced communication systems, however, the parameter settings for each analysis rule are complex and field-specific characteristics are difficult to account for, which limits how quickly the solution can respond.

To overcome these limitations, we developed a machine learning-based RCA solution. By using the latest machine learning techniques to learn the subtle differences in the network environment hidden in the vast amount of data collected from mobile communication networks, large volumes of data can be analyzed and diagnosed quickly, on the basis of the data itself rather than individual subjectivity. This is expected to contribute to improving the stability of communication systems.

What is AI Powered RCA?
Our solution is a machine learning-based RCA solution that uses our automated testing solution to label root causes in raw data obtained from network access failure and service interruption log samples. The training dataset contains the network's signal level and quality indicators, as well as network quality metrics such as data throughput, latency, and packet loss rate for each layer.

Figure 1. AI RCA concept diagram

The training dataset consists of approximately 1 million log records covering network issues that occurred in various environments during field testing. The data is collected through a variety of methods, including field testing, simulation testing, and laboratory testing, to reflect the complexity of real network problems. To cope with the strong correlations between key indicators, our model minimizes similarities between data characteristics and learns each root cause individually, enabling accurate classification and understanding.
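To make the shape of such a dataset concrete, the sketch below shows what a handful of labeled samples could look like in tabular form. The feature names (rsrp, sinr, dl_throughput_mbps, latency_ms, packet_loss_rate) and root-cause labels are hypothetical placeholders, not the actual fields used in the product.

```python
import pandas as pd

# Hypothetical labeled log samples: radio indicators plus per-layer quality
# metrics, each tagged with a root-cause label assigned during field testing.
samples = pd.DataFrame(
    [
        {"rsrp": -112.0, "sinr": -2.5, "dl_throughput_mbps": 1.2,
         "latency_ms": 210.0, "packet_loss_rate": 0.08,
         "root_cause": "weak_coverage"},
        {"rsrp": -78.0, "sinr": 18.0, "dl_throughput_mbps": 4.1,
         "latency_ms": 35.0, "packet_loss_rate": 0.00,
         "root_cause": "low_rb_allocation"},
    ]
)

X = samples.drop(columns=["root_cause"])  # model inputs
y = samples["root_cause"]                 # labels the model learns to predict
print(X.head(), y.tolist())
```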
Machine learning also lends itself to continuous learning and improvement, which means our solution can keep optimizing its models to respond to new issues that arise during real-world operation. This is a major advantage for maintaining reliability in a rapidly changing communications environment. In addition, AI's automated decision-making helps process and diagnose large amounts of data quickly. Our model supports efficient and accurate troubleshooting of communication systems while minimizing individual subjectivity through data-based judgment. These technical strengths let our solutions leverage the analytical power of machine learning to improve network reliability, availability, and performance.

Figure 2. Examples of the data set

Technology applied to the AI model
We adopted a gradient boosting ensemble model built with XGBoost and developed a powerful tool for effective root cause diagnosis. XGBoost is applicable to both classification and regression problems and is known for strong performance across a wide range of data sizes and types.

Figure 3. XGBoost concept diagram

This versatility is very useful because we deal with large amounts of network issue data from a variety of environments; the model learns useful patterns from that data and can effectively identify root causes. XGBoost also uses parallel processing and optimized data structures to deliver fast training and prediction, which is a big advantage when responding quickly to the many problems that occur in large-scale networks. For root cause diagnosis, we prioritize the features for each root cause and build a separate XGBoost model based on each priority, which increases the interpretability of the models and provides a clear picture of the characteristics of each root cause. Through these feature priorities, each model learns the importance of its features for the corresponding root cause, enabling an effective response to the complexity of the problem.

Learning and evaluation process
Figure 4 shows the learning and evaluation process. First, the data is cleaned through preprocessing: duplicate records are removed, and outliers and missing values are replaced with the most prevalent value in each data set. To address class imbalance, down-sampling based on Euclidean distance and up-sampling using SMOTE are then performed to balance the data.

In the Euclidean distance method, the distance of each point belonging to other labels is compared one-to-one against the center of the target root cause label's distribution. Points judged to be too far from that distribution are removed, leaving only the points that lie as close to the class boundary as possible. SMOTE synthesizes new minority-class samples by interpolating between neighboring minority-class samples, increasing their number and helping the model learn to recognize minority classes better.

Figure 4. Training process and evaluation process

After this preprocessing, an XGBoost model is created, trained, and validated for each priority. During validation, a grid search is used to find the most appropriate hyperparameters for each model. When a test data set is fed into the trained RCA solution, the root cause label predicted by the model can be compared against the actual root cause label.
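As an illustration of the training flow described above, the following sketch assembles a comparable pipeline from common open-source libraries (imbalanced-learn for SMOTE, xgboost, and scikit-learn for grid search). It is a minimal sketch under those assumptions, not the product's actual code; the one-vs-rest framing per root cause, the hyperparameter grid, and the placeholder data are illustrative only, and the Euclidean-distance down-sampling step is omitted.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

def train_per_cause_models(X, y, causes):
    """Train one binary XGBoost model per root cause (one-vs-rest)."""
    models = {}
    for cause in causes:
        # Binary target: 1 for the current root cause, 0 for everything else.
        y_bin = (y == cause).astype(int)

        # Up-sample the minority class with SMOTE; the product additionally
        # applies Euclidean-distance-based down-sampling, omitted here.
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y_bin)

        # Grid search over a small, assumed hyperparameter grid.
        grid = GridSearchCV(
            XGBClassifier(eval_metric="logloss"),
            param_grid={"max_depth": [4, 6, 8],
                        "n_estimators": [100, 300],
                        "learning_rate": [0.05, 0.1]},
            scoring="accuracy",
            cv=3,
        )
        grid.fit(X_bal, y_bal)
        models[cause] = grid.best_estimator_
    return models

# Example usage with random placeholder data (10 features, 3 causes).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.choice(["weak_coverage", "low_rb_allocation", "handover_failure"], 500)
models = train_per_cause_models(X, y, np.unique(y))
```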
The evaluation results below were derived as the average accuracy of each model.

              Total count   Success count   Fail count   Accuracy
5G-NR PS      27365         24243           3122         88.59%
5G-NR Voice   29903         25376           4527         84.86%
LTE PS        90505         82813           7692         91.50%
LTE Voice     29501         26958           2543         91.38%
Total         177274        159390          17884        89.91%

Figure 5. AI model evaluation results

XCAP-Cloud with AI-powered RCA
AI-powered RCA is provided through XCAP-Cloud, a cloud-based mobile network analysis solution. Test equipment collects data generated on the telecommunications carrier's mobile network, and the collected data is uploaded to the server in log form. Users can define rules that identify specific patterns in the logs and forward matching logs to the AI model. Logs must be interpreted before they are fed into the model; interpretation involves understanding the contents of the log and extracting KPIs. The interpreted logs and extracted KPIs are then sent to the AI model through gRPC, a protocol for efficient and reliable data transmission, and the AI RCA model infers the root cause from the received data. Inference results can be checked using the various visualization tools provided by XCAP-Cloud, which help users understand the results intuitively.

Figure 6. XCAP-Cloud

The accuracy of the AI prediction results extracted in real time from an XCAP-Cloud system equipped with the AI-powered RCA solution was observed to be 97%.

Figure 7. Real-time AI prediction accuracy
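The rule-matching and KPI-extraction step of this pipeline can be pictured with the small sketch below. The log format, rule patterns, and KPI names are hypothetical; in the product, the interpreted records would then be serialized and sent to the AI model over gRPC.

```python
import re

# Hypothetical log lines and rule patterns; the real XCAP-Cloud log format,
# rule syntax, and gRPC service definition are product-specific.
RULES = {
    "vonr_setup_failure": re.compile(r"IMS registration failure|PDCCH decoding error"),
    "call_drop": re.compile(r"radio link failure|RTP packet loss"),
}

def interpret_log(line: str) -> dict:
    """Extract simple KPIs (assumed key=value pairs) and match rules on one log line."""
    kpis = {k: float(v) for k, v in re.findall(r"(\w+)=(-?\d+\.?\d*)", line)}
    matched = [name for name, pat in RULES.items() if pat.search(line)]
    return {"kpis": kpis, "matched_rules": matched}

sample = "2024-05-01 10:32:01 IMS registration failure rsrp=-113.2 sinr=-1.4"
print(interpret_log(sample))
# In the product, records with matched rules would be forwarded to the AI
# model over gRPC for root cause inference.
```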
Use cases
All issues that may arise in the 5G/LTE environment are categorized and managed through RCA, and each event is labeled to help identify the root cause easily.

VoNR Call Setup Failure case
Voice service carried over 5G RAN, 5G Core, and IMS is called Voice over New Radio (VoNR). NR UEs can perform voice services directly on the NR network without falling back to the LTE network. VoNR call setup failures can occur for a variety of reasons: in the initial network construction stage, cell search failures, PDCCH decoding errors, and IMS registration failures are the main causes that can prevent the terminal from connecting to the network or registering with the IMS server. This solution can quickly classify the problem by extracting the cause for cells that generate many setup failures.

VoNR Call Drop case
Even if the initial setup succeeds and the call is connected normally, a call drop may occur when the terminal enters the cell edge, when a handover is triggered by RF deterioration, or when the settings of the source and target cells cannot maintain connectivity. In addition, even though information about neighboring cells is searched periodically, if the call must continue without a suitable cell being found, large numbers of RTP packets are lost and the network reclaims the radio link resources, causing a call drop. This solution helps analyze call drops accurately; by additionally checking the packet data and Layer 3 messages provided within XCAP-Cloud, users can quickly identify problems and take appropriate action.

Figure 8. RCA workflow

NR FTP Low Throughput case
Even after the call setup procedure completes normally in 5G NR, data calls such as FTP and HTTP may suffer quality issues with lower-than-expected throughput. Degraded RF performance is a typical indicator of low throughput, while low throughput under normal RF conditions points to parameters related to throughput and capacity, such as UL/DL bandwidth, MCS, layer count, and rank index. This solution supports accurate analysis by extracting cases of low RB allocation in the network and the resulting low throughput.

Future directions
We aim to provide intuitive and versatile solutions that diagnose wireless network problems quickly and accurately. A successful AI model must not only achieve high accuracy, precision, and recall, but also deliver prediction performance reliable enough for customers to use as wireless network analysis indicators. To achieve this, the AI models must absorb the know-how of highly skilled wireless network analysis experts and continuously evolve.

Our goal for a wireless network analysis know-how training system is to build a customer-tailored AI model learning infrastructure that allows customers to curate data sets themselves and upgrade the AI models directly. This resolves concerns about personal information and data leakage while enabling AI models that meet customer needs to be built more effectively. In addition, if only data from the radio connection between the mobile device and the base station is used, the root cause of failures stemming from upper-layer access probes or core probes may remain unclear. We therefore plan to evolve the solution into a comprehensive end-to-end learning model, and we will continue to take on this challenge.

Figure 9. Customer-tailored AI model learning system
Jun 05, 2024
-
- [White Papers] A New Standard for Video Quality Assessment, VQML
- Contents
Necessity of VQML
What is VQML?
Techniques in VQML
Learning & Evaluation Process of VQML
Input & Output Value
Dataset Utilized in VQML
Outstanding Points of VQML
High Performance of VQML
Verify Reliability of VQML

Necessity of VQML
The mobile network market is growing explosively due to the rapid growth of 4G and 5G subscribers and the expansion of service coverage. According to the Ericsson Mobility Report of November 2020, 5G subscriptions are expected to exceed 3.5 billion by the end of 2026. In this situation, the growth of mobile video services is no surprise. Figure 1 shows mobile data traffic volume by media as measured by BI Intelligence: in 2020, video services accounted for about 75% of total mobile data usage. Combined with the growth of 5G and the COVID-19 situation we are facing today, this number will grow even more, and mobile network operators should be prepared for it.

Figure 1. Mobile data traffic usage by media

The performance and quality of the video services that users experience are critical aspects of network operations. In live-streaming services that transmit video over the network in real time, videos may not be transmitted completely due to various loss issues in the network, which can result in different received quality for different consumers. To prevent such situations, we need a way to measure video quality accurately in real time and ensure consistently good quality of video transmission on mobile networks.

Innowireless' VQML is introduced as a deep learning-based, high-level video quality assessment solution. Compared with traditional evaluation methods, VQML stands out for its faster processing speed, lower implementation cost, and the fact that it needs no original reference video. As a new way to measure video quality, VQML lets mobile and broadband network operators meet customer expectations and needs through optimized network operation.

What is VQML?
The simplest way to measure the quality of a video is to obtain the MOS (Mean Opinion Score) using human judgments. This is also the most accurate way, but it requires too many people and too much time and cannot be done in real time. To resolve these time and cost problems, VQML takes a high volume of video data and continuously trains a neural network to predict the video quality score as accurately as human judgment. Using deep learning, VQML learns patterns of videos and their MOS values from a database derived from large-scale viewer surveys. VQML predicts the quality of a video as a MOS value in the range of 1 to 5, corresponding to actual human perception. The meaning of each score is as follows.

Score   Quality perception
5       Excellent
4       Good
3       Fair
2       Poor
1       Bad

Figure 3. Video quality assessment in VQML

Techniques in VQML
Methods for Video Quality Assessment (VQA) include Full Reference (FR), Reduced Reference (RR), and No Reference (NR).

Figure 4. Video quality assessment solutions

FR, which is currently used by most products, evaluates quality by comparing the original video with the received video. Although it shows high reliability thanks to the direct comparison between videos, it is difficult for the client to have the original video.
It is also difficult to use on platforms where videos are created and served in real time. For this reason, the FR method is not a suitable solution, especially now that live-streaming services are commercialized and video conferences and online classes are increasing due to COVID-19.

VQML operates on the NR method, which measures quality using only the received video. Because it does not need the original video, the NR method can calculate a quality metric for any video in real time and effectively assess video quality in areas where FR methods are difficult to apply. Well-known areas where NR methods are ideal quality assessment tools include CCTV and real-time video platforms, which can use the NR method to measure the actual perceived quality by identifying all degradation factors in their own video without a comparison video. The NR method typically measures quality by extracting statistical characteristics of the video with mathematical algorithms. Because it relies on KPIs designed by researchers, it may produce a large average error compared with the quality score actually perceived by humans. To compensate for this limitation of the NR method, VQML trains repeatedly on a large-scale, highly reliable database to continuously improve its prediction of the video quality score.

Learning & Evaluation Process of VQML
The training of VQML uses the KoNViD-1k and YouTube-UGC datasets. The KoNViD-1k dataset is a large database of MOS collected from over 1,000 videos rated by dozens of viewers, and the YouTube-UGC dataset is a video database that includes 4K UHD content. VQML's deep learning network consists of two CNN modules and one GRU module.

Figure 5. Learning process of VQML

When a video sample from the training dataset is input into the model, the CNN modules process each frame and the GRU module then analyzes the sequence of consecutive frames to recognize the pattern of the video sample's features. The extracted features are used to predict the quality of the video. Once the predicted score is obtained, the difference from the actual MOS is calculated and the deep learning network is updated to reduce this error. By repeating this process, the network learns to measure the quality of a video with minimal error, and after enough training VQML can predict a video quality score very close to the actual MOS value.

Input & Output Value
VQML requires only the video itself to predict the video quality score. When a video is input into the VQML neural network, VQML automatically decodes it into frames, which are sets of pixel values expressed in RGB, extracts features from the frames, and computes the quality score of the video as the output of the model. VQML can execute this process at close to real-time speed. By default, the quality score is output after the entire video has been processed, and an option is provided to adjust the viewing window in order to predict the quality score for specific segments within the video.
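As a rough illustration of this kind of architecture, the sketch below assumes PyTorch and torchvision: an ImageNet-pretrained ResNet-50 extracts per-frame features, a GRU models the frame sequence, and a simple average over time produces a single score. The actual VQML network uses two CNN modules and a perception-motivated temporal pooling layer, so the backbone choice, layer sizes, and pooling here are assumptions for the example only.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrameFeatureExtractor(nn.Module):
    """Per-frame CNN features from an ImageNet-pretrained ResNet-50 backbone."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc

    def forward(self, frames):                    # frames: (T, 3, H, W)
        return self.features(frames).flatten(1)   # (T, 2048)

class NRVideoQualityModel(nn.Module):
    """CNN features -> GRU over time -> per-frame score -> temporal pooling."""
    def __init__(self, feat_dim=2048, hidden=256):
        super().__init__()
        self.cnn = FrameFeatureExtractor()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                    # (T, 3, H, W) for one video
        feats = self.cnn(frames).unsqueeze(0)     # (1, T, feat_dim)
        out, _ = self.gru(feats)                  # (1, T, hidden)
        frame_scores = self.head(out).squeeze(-1) # (1, T)
        # Simple temporal pooling: average the per-frame scores into one MOS.
        return frame_scores.mean(dim=1)           # (1,)

# Example: score 16 random 224x224 frames (placeholder for a decoded video).
model = NRVideoQualityModel().eval()
with torch.no_grad():
    mos = model(torch.rand(16, 3, 224, 224))
print(float(mos))
```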
Dataset Utilized in VQML
Deep learning has recently been applied to a wide range of applications, and a well-recognized database is required to ensure the reliable performance of such deep learning-based solutions. The KoNViD-1k database (http://database.mmsp-kn.de/konvid-1k-database.html) used for VQML's training is a highly reliable database consisting of 1,200 videos evaluated by more than 100 people. It has been cited more than 130 times in papers around the world since the VQA group at the University of Konstanz, Germany, published it at the IEEE QoMEX 2017 conference, and it is widely accepted by academia as a database for video quality assessment.

Another training database, the YouTube-UGC dataset, contains user-generated content collected from YouTube, providing videos of different resolutions and formats. Some of them have 4K UHD (Ultra High Definition) characteristics, making the dataset a useful resource for research and development and widely used for quality evaluation and content classification.

The KoNViD-150k database (http://database.mmsp-kn.de/konvid-150k-vqa-database.html), released in 2021, consists of the KoNViD-150k-A set (152,265 videos evaluated by 5 people) and the KoNViD-150k-B set (1,577 videos evaluated by more than 89 people), ensuring high reliability. The VQA group at the University of Konstanz states that this database can be used for efficient video quality tests. Besides the training data from KoNViD-1k, VQML uses the KoNViD-150k-B set for objective performance testing. This allows the deep learning network in VQML to achieve high accuracy and to make video quality predictions that reflect patterns in real-world communication environments.

Outstanding Points of VQML
VQML is also unique in the configuration of its deep learning network. Deep learning-based VQA solutions often rely on transfer learning with pre-trained CNN modules: most use a single pretrained CNN module followed by a recurrent module such as an LSTM or GRU. In such a structure, the CNN module is pre-trained on the ImageNet database (https://www.image-net.org/download.php), a collection of more than 1 million images built by Professor Fei-Fei Li at Stanford University, and the learning on the actual training database takes place only in the recurrent module that follows.

In contrast, VQML consists of two CNN modules and one GRU module. The CNN modules are pre-trained with the ImageNet database and with the KonIQ-10k database (http://database.mmsp-kn.de/koniq-10k-database.html), a set of over 10,000 images produced by the VQA group at the University of Konstanz. The GRU module then continues training the model with the features extracted from the two CNN modules, so learning based on the training data occurs through all three modules. To mimic a characteristic of the human visual system, a temporal pooling layer is added: it accounts for the way humans perceive a sudden, momentary drop in streamed video quality as much worse than it actually is, resulting in a stronger degradation of the overall quality score. As a result, VQML can extract highly detailed contextual as well as temporal characteristics of the assessed videos better than other VQA solutions.

High Performance of VQML
Figure 9 is a verification graph of VQML's performance based on the KoNViD-1k dataset.

Figure 9. Correlation between reference MOS and VQML

The graph shows the correlation between the actual viewers' MOS and the quality score predicted by VQML. The predictions are very close to the actual MOS values, forming almost a straight line, with a correlation of about 86.5%, which demonstrates that VQML is a reliable solution. In addition, the MAE (Mean Absolute Error) of VQML is 0.235, smaller than the average MAE of other commercially available products. This means that VQML's predictions can be as accurate as human perception in judging the quality of a video.
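Metrics of this kind can be computed for any predictor in a few lines; the sketch below, assuming NumPy and SciPy, derives the linear correlation, the rank correlation, and the MAE between reference MOS values and predicted scores. The numbers are placeholders, not VQML outputs.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Placeholder reference MOS and predicted scores (not actual VQML results).
ref_mos = np.array([4.2, 3.1, 2.4, 4.8, 1.9, 3.6])
pred    = np.array([4.0, 3.3, 2.6, 4.6, 2.2, 3.5])

plcc, _ = pearsonr(ref_mos, pred)      # linear correlation (cf. the ~86.5% figure)
srocc, _ = spearmanr(ref_mos, pred)    # rank correlation, common in VQA papers
mae = np.mean(np.abs(ref_mos - pred))  # mean absolute error (cf. the 0.235 figure)

print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}  MAE={mae:.3f}")
```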
Verify Reliability of VQML
To verify the reliability of VQML, tests were conducted using various types of videos, each with characteristics that represent video quality in a specific environment.

Type        Feature
Drama       General screen
Movie       Relatively dark screen
Sports      Lots of movement and bright light
Animation   Artificial colors that stand out

Figure 10. Video types and features used in the tests

Each quality indicator of each video was then distorted in several steps using the FFmpeg codec, and the MOS value of the distorted video was measured.

Indicator      Option                                                          Units
Blockiness     Change the degree of blocking by adjusting the bitrate (b:v)    10000k / 5000k / 1000k / 500k / 300k / 200k / 100k
Blur           Adjust the degree of blur with the boxblur option               0.0 / 2.5 / 5.0 / 7.5 / 10.0 / 20.0 / 30.0
Brightness     Adjust the brightness option of the eq AVOptions                -1.0 / -0.75 / -0.5 / -0.25 / 0 / 0.25 / 0.5 / 0.75 / 1.0
Colorfulness   Distort color by adjusting the saturation option of eq          0.0 / 0.5 / 1.0 / 1.5 / 2.0 / 2.5 / 3.0
Contrast       Adjust the contrast option of the eq AVOptions                  0.0 / 0.5 / 1.0 / 1.5 / 2.0 / 2.5 / 3.0
Resolution     Change the resolution with the scale option                     2160p / 1440p / 1080p / 720p / 480p / 360p / 240p / 144p

Figure 11. Encoding options, units, and examples applied to the video indicators

The measurements made with the VQML algorithm are as follows.
Blockiness: the higher the bitrate, the higher the MOS value.
Blur: the cleaner the video, the higher the MOS value.
Brightness: the lower the brightness, the lower the MOS value.
Colorfulness: the lower or higher the color distortion intensity, the lower the MOS value.
Contrast: the lower the contrast, the lower the MOS value.
Resolution: the higher the resolution, the higher the MOS value.

Figure 12. MOS measurement results for each distorted indicator

When the MOS values obtained under each encoding-option change were compared with the original video, the original recorded the highest score. In other words, as the degree of distortion increases, the video quality decreases, and the MOS value measured by VQML decreases accordingly. These results are an important indication that VQML is a reliable tool for video quality evaluation. As such, VQML, Innowireless' own original video quality assessment solution, is expected to be highly competitive in the market.
May 17, 2024
-
- KCA's Private 5G Site Regulating with XCAT-IXA and XCAL-AIR
- Jessica Jiyung Oh

As businesses across industries embrace digital transformation and seek enhanced connectivity, private 5G networks are emerging as a powerful solution to address their unique needs. The private 5G network market is projected to grow remarkably, with a CAGR of 42.3% to 51.2% expected through 2030. Private 5G networks offer enhanced security, ultra-low latency, and guaranteed bandwidth, making them ideal for mission-critical applications in industries such as manufacturing, healthcare, and transportation. Unlike public networks, private 5G networks can be customized to align precisely with specific business requirements and use cases, optimizing resource allocation and performance. The convergence of private 5G with IoT devices, edge computing, and AI is unlocking unprecedented possibilities for automation, real-time data analytics, and predictive maintenance.

[Regulatory Landscape of Private 5G in Korea]
Regulatory frameworks for private 5G networks vary across regions. Within KCA's multifaceted role in shaping the Korean private 5G market, "developing regulations and guidelines" serves as a critical cornerstone.

Key areas of regulation
- Spectrum allocation: KCA defines how the limited radio spectrum is allocated for private 5G networks. This involves establishing licensing procedures, determining eligibility criteria for different types of users, and setting spectrum usage fees.
- Technical standards: KCA sets technical standards for network equipment, software, and services to ensure compatibility, interoperability, and network security. This includes defining minimum performance requirements, data encryption protocols, and network management procedures.
- Security and privacy: KCA establishes robust security and privacy regulations to protect sensitive data within private 5G networks. This includes data breach notification requirements, user authentication protocols, and data retention policies.
- Quality of service (QoS): KCA defines minimum QoS requirements for private 5G networks to ensure reliable and consistent connectivity for critical applications. This involves setting metrics for latency, bandwidth, and packet loss.
- Interference prevention: KCA takes measures to prevent interference between private 5G networks and public networks or other radio services. This may involve setting geographic separation requirements or defining power limits for network transmissions.

[Accuver and KCA Collaborate to Pave the Way for Private 5G Networks]
Private 5G networks are booming in smart factories, but signal interference between nearby businesses can be a problem. To ensure smooth operation and regulatory compliance, KCA (Korea Communications Agency) needed a solution for on-site inspections. Their plan consisted of three key parts:
- Building a database: this database would store inspection results and statuses for different scenarios, helping KCA track compliance and identify potential issues.
- Developing a dedicated inspection program: KCA wanted a program specifically designed for private 5G networks, including wireless station verification and signal strength/interference measurements.
- Defining evaluation criteria: clear standards were needed to assess inspection results and ensure consistent enforcement.
To tackle this challenge, KCA partnered with Accuver, a technology company specializing in network testing solutions.
Together, they created a comprehensive inspection protocol in 2023:
- Wireless station verification: Accuver's portable analyzer (XCAT-IXA) verified the proper configuration of network equipment.
- Indoor/outdoor signal measurements: XCAT-IXA was used for walking measurements, both inside and outside buildings, to assess signal strength and coverage.
- Outdoor interference around high-rise buildings: for scenarios where walking measurements weren't feasible, Accuver used a drone-based solution (XCAL-Air) in conjunction with XCAT-IXA to measure interference levels more effectively.

After six months of collaboration, the inspection program is ready, and KCA will now report it to the Korean Ministry of Science and ICT for approval. This new program will help ensure the responsible development and efficient operation of private 5G networks in Korea, ultimately benefiting businesses and consumers alike.

[XCAL-Air for Outdoor Interference Measurement]
XCAL-Air is a powerful, integrated drone-based package specifically designed for measuring and analyzing performance in airspace networks. This advanced solution combines various network equipment, including spectrum analyzers, scanners, and mobile network measurement devices, and offers a comprehensive feature set for efficient network verification in airspace environments. Thanks to Accuver's tools, KCA can obtain precise 5G signal strength measurements in the air: the XCAT-IXA spectrum analyzer checks signal strength and transmits the data to the cloud, enabling real-time signal strength, such as the RSRP from private 5G cells, to be viewed on a 3D map and making it easy to identify weak spots or interference. According to KCA rules, areas with an RSRP stronger than -115 dBm outside the designated zone require adjustment. In the future, KCA plans to build a database of on-site inspection data for private 5G sites and manage it continuously for supervision.

[Conclusion]
The convergence of private 5G with IoT devices, edge computing, and AI opens unprecedented possibilities for automation, real-time data analytics, and predictive maintenance. The role of entities like the Korea Communications Agency (KCA) becomes crucial in shaping regulations and guidelines to facilitate the seamless integration of private 5G networks.

Accuver's collaboration with KCA showcases a proactive approach to addressing challenges in private 5G deployment. By developing a comprehensive on-site inspection protocol, including wireless station verification, signal strength/interference measurements, and interference prevention, the collaboration ensures smooth operation, regulatory compliance, and efficient management of private 5G networks in smart factories. The introduction of XCAL-Air as a drone-based solution for outdoor interference measurement further enhances the ability to identify and resolve coverage issues in airspace networks. Accuver's tools, such as the XCAT-IXA spectrum analyzer, enable precise 5G signal strength measurements, and their integration aligns with KCA's objective of creating a database of on-site inspection data for continuous supervision and management of private 5G sites. Cooperation between Accuver and KCA to revitalize the private 5G network market will continue. Accuver plans to keep developing solutions for two key markets: regulatory agencies, like the FCC and Ofcom, that require private 5G on-site inspections (similar to KCA), and engineering companies that need private 5G completion inspections.
Jan 23, 2024
-
- Streamline the O-RAN IOT badge compliance test with AEGIS-O
- Jessica Jiyung Oh

[O-RAN Certification and Badging Program]
The O-RAN certification and badging program aims to minimize repetition of the fundamental and common tests performed to verify and validate O-RAN based products and solutions before their deployment in operator networks. It defines and unifies processes, procedures, templates, data formats, etc., to ensure sharing of test results and repeatability of the executed tests. Currently, O-RAN certificates, IOT badges, and E2E badges have been defined for the following products (and interfaces).

● O-RAN certificates for
- O-RU with Open Fronthaul interface (WG4)
- O-DU (or combined O-DU/O-CU) with Open Fronthaul interface (WG4)
● O-RAN IOT badges for
- O-RU and O-DU (or combined O-DU/O-CU) connected via the Open Fronthaul interface (WG4)
- eNB and en-gNB connected via the X2 interface (WG5)
- gNB-DU and gNB-CU connected via the F1-C interface (WG5)
- two gNBs connected via the Xn-C interface (WG5)
● O-RAN E2E badges for
- O-RU, O-DU, and O-CU (or their combinations) included in the E2E system or subsystem (TIFG)

O-RAN certification and badging refers to tests defined in O-RAN test specifications produced by the related O-RAN Work/Focus Groups. Through the O-RAN certification and badging program, any O-RAN vendor, regardless of size, can showcase its products and solutions, improve interoperability, and ultimately increase vendor diversity and supply chain resilience for operators embracing open RAN. O-RAN certification and badging can also improve operator confidence in their chosen O-RAN based blueprint and reduce the complexity and duration of pre-deployment testing.

[AEGIS-O Supports Automated Test and Report Development for O-RAN IOT Badging (WG5)]
AEGIS-O introduces a new feature that provides automated testing for O-RAN IOT badges, covering all test cases defined for the O-RAN Open F1/X2/Xn interfaces as outlined in the Interoperability Test Specification of the O-RAN Alliance Working Group 5 (WG5). Operators can easily examine the compatibility of multi-vendor systems based on the O-RAN test specification and precisely analyze interoperability problems. Accuver (Innowireless) is an active member of the O-RAN Alliance, and AEGIS-O was developed to automatically perform and report the same tests conducted by OTIC, so operators and vendors can independently conduct interoperability tests at the same level as OTIC.

AEGIS-O not only executes the tests defined in the IOT badge specification but also provides features to analyze each test case in detail, such as the Packet Viewer, CDR Viewer, and Detail Report. Users can access statistical information about all test cases and investigate each test case for troubleshooting. This helps both operator and vendor communities prepare for OTIC tests and builds confidence in O-RAN based products and solutions. Vendors can enhance interoperability and prepare for OTIC certification tests by proactively running the test cases defined in the standard. By conducting tests with AEGIS-O, operators can ensure interoperability at the OTIC level when configuring systems with products from different vendors. This reduces testing effort for network operators and promotes vendors' O-RAN based products and solutions, creating opportunities for interoperability among different vendors and acceptance by others.
[AEGIS-O O-RAN IOT Badging (WG5) Test Procedure]
AEGIS-O provides automated testing for an eNB and gNB connected via the X2 interface, a gNB-DU and gNB-CU connected via the F1-C interface, and two gNBs connected via the Xn-C interface. It supports all the test cases specified in the O-RAN Open F1/X2/Xn Interface Working Group Interoperability Test Specification. Once the user selects the interface and test case and initiates the test, the results are assessed according to the criteria established within the O-RAN ALLIANCE framework. AEGIS-O offers Badge Reports that present the test results, and users can investigate individual test cases in depth using tools such as the Packet Viewer, CDR Viewer, and Detail Report.

Midhaul traffic is tapped and flows into AEGIS-O through a capture card and packet broker. When the user clicks WG5.IOT on the AEGIS-O menu screen and selects the interface to be tested, the Badge Setting screen is displayed. In the Badge Setting window, users can select test cases and configure profile information such as DUT information and the network profile. The Badge Status window shows the status of all scheduled test cases, and the result of each test case is displayed as soon as its run completes. Users can save or submit results by exporting both the Badge Report and the Detail Report. AEGIS-O's Badge Report format aligns with the report structure of the O-RAN Alliance badging program, enabling vendors to prepare for OTIC tests and allowing operators to compare it easily with other Badge Reports. Users can also investigate the packets and CDRs related to a particular test case if needed.

With AEGIS-O's automated O-RAN WG5 IOT tests, operators can quickly and easily detect and analyze IOT problems using the real-time packet analysis feature. This approach offers a cost-effective way to verify the interoperability of operators' O-RAN systems built from products of multiple vendors.

References
[1] Overview of Open Testing and Integration Centre (OTIC) and O-RAN Certification and Badging Program, O-RAN ALLIANCE Test and Integration Focus Group, White Paper, April 2023.
Oct 30, 2023
-
- Self-Attention-based Uplink Radio Resource Prediction in 5G Dual Connectivity
- Sungkyunkwan University conducted this study with the support of Accuver (XCAL-Solo III). They proposed a self-attention-based deep learning model to predict uplink radio resources in 5G Dual Connectivity (5G DC). The model was trained on commercial 5G DC traffic data from three major carriers in South Korea and achieved an average prediction accuracy of 95.08% under various mobility and cell-load conditions.
Oct 27, 2023
-
- Tackling Beamforming Test Hurdles with XCAT-MAIS's mMIMO Expertise
- By Dr. Joseph Lee

[The definition and value of beamforming]
Beamforming is a transmission technique that uses multiple array antennas to concentrate the power of a transmitted RF signal onto a particular user. A digital signal processing algorithm applies relative amplitude and phase shifts to the antenna elements in such a way that the signals combine towards the user's location and cancel each other out in the opposite direction [1].

Figure 1. Massive MIMO antenna performing 3D (azimuth and elevation) beamforming

Beamforming plays a key role in multi-user multiple input, multiple output (MU-MIMO), in which concurrent users share time and frequency resources to dramatically increase the capacity of a channel. In a Massive MIMO antenna array, a large number of antenna elements (e.g., 64T/64R) serve users in different locations using beamforming. With RF environments constructed per user through signal concentration and cancellation, multiple users achieve higher throughput at the same time and frequency. Together with higher carrier aggregation, higher-order modulation, and enhanced use of unlicensed spectrum, beamforming and Massive MIMO can deliver Gigabit throughput in LTE-Advanced and 5G networks [2]. In urban, high-traffic environments where spectrum is limited, Massive MIMO's beamforming is a significant advantage: by accommodating more concurrent users and giving them better quality of service, it allows mobile operators to attain a better return on capital expenditure under limited resources.

[How beamforming is performed between the base station and UE]
There are two types of beam: a common beam used for the initial connection, and a dedicated beam used once that connection is established. Since a base station cannot locate a User Equipment (UE) at the time of its connection, the base station secures coverage through a time sweep, with beams uniformly divided over space. Each beam carries a unique Synchronization Signal Block Beam ID (SSB ID) and Physical Beam Index (PBI), and the UE reports its status, including SSB IDs and power information, to the base station in real time. Once the UE is connected, a dedicated beam for data transmission can be concentrated on the UE. The base station tracks the UE by measuring the uplink channel via a Sounding Reference Signal (SRS) transmitted from the UE, then applies the uplink channel estimate to the downlink channel using TDD channel reciprocity. Beamforming coefficients for each antenna can then be calculated from the channel estimate with a linear-algebraic zero-forcing method, which helps minimize interference between the users' signals. Multi-user MIMO is thus achieved through beamforming gain and interference rejection gain.
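The zero-forcing step can be sketched in a few lines of NumPy. The array size, user count, and random channel matrix below are placeholders, and a real system would add regularization, power constraints, and per-subcarrier processing; the sketch only shows why a zero-forcing precoder suppresses inter-user interference.

```python
import numpy as np

rng = np.random.default_rng(0)
num_ant, num_users = 64, 8   # e.g., a 64T Massive MIMO array serving 8 users

# Downlink channel estimate H (users x antennas), obtained in practice from
# SRS-based uplink estimation plus TDD channel reciprocity.
H = (rng.normal(size=(num_users, num_ant)) +
     1j * rng.normal(size=(num_users, num_ant))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, so that H @ W is diagonal,
# i.e., each user's beam nulls interference toward the other users.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)   # normalize per-user beam power

effective = H @ W
print(np.round(np.abs(effective), 3))  # near-diagonal: interference suppressed
```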
[Beamforming KPIs]
The list below highlights beamforming key performance indicators (KPIs) and explains why they are important to measure and analyze. The goal: to verify whether throughput matches channel capacity, and if not, why.

KPIs related to the SSB/common beam
- Number of detected SSBs: a maximum of 64 SSBs can be transmitted. A base station can measure and estimate the location and direction of a UE from the SSB IDs (or PBIs) that the UE receives and reports.
- PBI and RSRP/RSRQ: the PBIs and associated power information received by a UE can be used to predict the UE's handover situation.
- DMRS SINR: the Demodulation Reference Signal (DMRS) SINR can be used to estimate received channel characteristics.
- Beamforming gain: this parameter can be used to estimate the efficiency of beamforming at the time of measurement.

KPIs related to the user/dedicated beam
- DMRS SINR: since multiple orthogonal DMRS signals can be allocated and beamformed to support MIMO transmission, the DMRS SINR can be used to estimate the degree of interference in each user's signal on the corresponding physical channel.
- Beamforming gain and interference rejection gain: analyzing these KPIs helps estimate the performance of both channel estimation and interference rejection.

[The challenges of measuring antenna beamforming performance]
One way to measure a Massive MIMO system's beamforming performance is field testing. Testing must be conducted with a representative number of UEs located in the sector (e.g., 16 UEs), and the UEs must be in different positions, i.e., not bunched together in a single location or line-of-sight spot. The UEs also need to support Transmission Mode 8 or 9 (TM8 or TM9) to report beam-specific KPIs. The idea is to measure how much gain a Massive MIMO antenna provides compared with a conventional antenna system: through beamforming, a Massive MIMO antenna should provide better signal quality to each user, so user throughput and sector throughput should both be higher.

For efficient field testing, the Accuver XCAL-Manager and XCAL-Solo tools can be used. XCAL-Manager is a cloud-based server that lets users assign test cases remotely to test UEs and monitor their status and locations in real time. It also lets users choose which UEs to aggregate, so they can see sector throughput in addition to user throughput. Indeed, Signals Research recently performed a similar Massive MIMO test using the Accuver XCAL-Manager and XCAL-Solo tools [3].

If Massive MIMO/beamforming tests must be run repeatedly, field testing becomes time- and resource-consuming. Moreover, field tests don't offer a repeatable, consistent environment in which to verify Massive MIMO beamforming performance or identify problems caused by multiple sources. A simpler way to perform the test is to bring the field RF environment into a lab and test it repeatedly using a channel emulator. A channel emulator is an instrument that takes RF inputs and generates RF outputs, combining internally generated multipath signals with individual amplitude and phase gain control, including propagation delay and path loss.

To measure beamforming performance accurately, a channel emulator must provide channel reciprocity, in which the downlink channel equals the uplink channel, just as in a real over-the-air channel. However, a channel emulator is an electronic instrument incorporating RF and digital circuits, which would not normally satisfy the reciprocity requirement. To meet it, the channel emulator must be equipped with an internal or external calibration function. Since the characteristics of RF components naturally drift with temperature, calibration should be performed regularly, so it is critical that the calibration function completes in a relatively short time, lest too much time be wasted on calibrations.
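To make the reciprocity requirement concrete, the sketch below compares a downlink channel estimate with the corresponding uplink estimate path by path and checks the mismatch against the ±0.35 dB amplitude and ±3 degree phase tolerances quoted for XCAT-MAIS below; the channel coefficients themselves are placeholder values.

```python
import numpy as np

def reciprocity_errors(h_dl, h_ul):
    """Per-path amplitude (dB) and phase (deg) mismatch between DL and UL."""
    amp_db = 20 * np.log10(np.abs(h_dl) / np.abs(h_ul))
    phase_deg = np.degrees(np.angle(h_dl * np.conj(h_ul)))
    return amp_db, phase_deg

# Placeholder complex channel coefficients for a few BS-to-UE paths.
h_dl = np.array([0.80 + 0.10j, 0.45 - 0.30j, 0.12 + 0.55j])
h_ul = np.array([0.81 + 0.09j, 0.44 - 0.31j, 0.13 + 0.54j])

amp_db, phase_deg = reciprocity_errors(h_dl, h_ul)
ok = (np.abs(amp_db) < 0.35) & (np.abs(phase_deg) < 3.0)  # example tolerances
for i, (a, p, passed) in enumerate(zip(amp_db, phase_deg, ok)):
    print(f"path {i}: {a:+.2f} dB, {p:+.2f} deg -> {'PASS' if passed else 'FAIL'}")
```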
[Introducing Accuver XCAT-MAIS]
The Accuver Massive MIMO Channel Emulator (MAIS) is the latest addition to the Radio Access Network (RAN) testing portfolio, part of Accuver's Field-to-Lab (F2L) testing solutions. F2L products deliver quick, cost-effective RAN software verification in realistic lab environments prior to field deployment.

XCAT-MAIS is a channel emulator with M inputs that connect to BS antenna ports and N outputs that connect to UE antenna ports. MAIS supports various M x N configurations, allowing users to test the performance of, for example, a 16 x 16 LTE FD-MIMO or a 64 x 64 5G Massive MIMO system. For RAN vendors, MAIS provides a repeatable, consistent test environment in which to verify Massive MIMO performance. For wireless operators, MAIS enables comparison of FD/Massive MIMO performance from different RAN vendors in the same RF environment. Both of these use cases are difficult to achieve with field testing because of the variables present in the field environment.

Figure 2. Accuver MAIS system

The MAIS system supports a 100 MHz channel bandwidth for 5G testing and frequencies from 300 MHz to 6 GHz. It features a built-in calibration kit that is fast and accurate, typically achieving |amplitude| < 0.35 dB and |phase| < 3 degrees within 15 minutes, and it sustains the calibrated state for up to 72 hours.

[XCAT-MAIS benefits]
MAIS offers users the following benefits:
- Adjustable amplitude and phase rotation
- Monitoring of BS and UE outputs
- Support for ITU channel models and user-defined models
- Support for distributed, remote testing configurations
- Measurement of beam tracking performance: MAIS has a signal capturing function at both the BS and UE sides to measure all the beamforming KPIs listed above
- Fast self-calibration, typically within 15 minutes: calibration hardware is included in MAIS to ensure it creates channel conditions properly and achieves channel reciprocity within the specified tolerances

[XCAT-MAIS specifications]
- Channel bandwidth of up to 100 MHz
- Frequency: 300 MHz to 6 GHz
- RF port capacity: a single MAIS chassis can support up to 64 RF ports (16 slots, 4 RF ports per slot)
- Channel reciprocity (for TDD) with calibration tolerances of amplitude ±0.35 dB and phase ±3 degrees from any BS port to any UE port
- Support for DL 256QAM and UL 256QAM at frequencies below 4 GHz
- Channel fading models: ITU Ped. A/B, Veh. A/B, EPA, EVA, ETU, 2D/3D SCM, HST, and user-defined
- Number of multipaths: 8 (expandable to 20)
- Doppler frequency of up to 1350 Hz (560 km/h @ 2.6 GHz)
- AWGN built in at each RF port

[Signal and beamforming analysis with XCAT-MAIS]
MAIS supports the following analysis:
1. Signal analysis on:
   a. Each RF port
   b. gNB and UE
   c. Relative power, OFDM, OBW, and I/Q samples
2. Beamforming analysis on:
   a. RF ports
   b. SSB/common beam: PCI, PBI, power, correlation accuracy, EVM, SINR; relative phase consistency
   c. PDSCH/dedicated beam: relative phase consistency; beamforming accuracy
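To illustrate what a fading profile such as EPA describes, the sketch below builds a simple static tapped-delay-line channel from the commonly cited 3GPP Extended Pedestrian A tap delays and powers and applies it to a placeholder waveform. The sample rate is an assumption, Doppler fading is omitted, and this is not how MAIS implements its channel models internally.

```python
import numpy as np

# Commonly cited 3GPP Extended Pedestrian A (EPA) tap profile; a channel
# emulator realizes profiles like this (plus Doppler fading) in hardware.
delays_ns = np.array([0, 30, 70, 90, 110, 190, 410])
powers_db = np.array([0.0, -1.0, -2.0, -3.0, -8.0, -17.2, -20.8])

fs = 30.72e6                                   # example sample rate (assumption)
rng = np.random.default_rng(1)

# One static Rayleigh realization per tap (no Doppler, for simplicity).
gains = (np.sqrt(10 ** (powers_db / 10) / 2) *
         (rng.normal(size=7) + 1j * rng.normal(size=7)))
tap_idx = np.round(delays_ns * 1e-9 * fs).astype(int)

h = np.zeros(tap_idx.max() + 1, dtype=complex)  # discrete-time impulse response
np.add.at(h, tap_idx, gains)                    # taps may share a sample bin

tx = rng.normal(size=1024) + 1j * rng.normal(size=1024)  # placeholder waveform
rx = np.convolve(tx, h)                         # multipath channel applied
print(len(h), np.round(np.abs(h), 3))
```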
[Conclusion]
Wireless technology is changing fast, and the complexity of testing and validating its features grows just as quickly. Such is the case with validating the performance of beamforming and Massive MIMO, two important features of 5G. Massive MIMO beamforming performance can be measured with field testing, but doing so is very time- and resource-consuming, and the field doesn't offer a repeatable, consistent test environment in which to troubleshoot problems that may stem from multiple sources. Accuver MAIS (Massive MIMO Channel Emulator) provides a repeatable, consistent, real-world-like test environment in which to verify and compare Massive MIMO product performance. The MAIS system supports a 100 MHz channel bandwidth for 5G testing and frequencies from 300 MHz to 6 GHz, and it has a built-in calibration kit that is fast (within 15 minutes) and highly accurate, ensuring that little time is wasted in calibration and that users can get on with Massive MIMO testing quickly.

References
[1] Masterson, C. (2017, June). Massive MIMO and beamforming: The signal processing behind the 5G buzzwords. Analog Dialogue, 51.
[2] Qualcomm. (2017, February). The essential role of Gigabit LTE & LTE Advanced Pro in a 5G world.
[3] Thelander, M. (2018, November). The matrix: Quantifying the benefits of 64T64R Massive MIMO with beamforming and multi-user MIMO capabilities. Signals Ahead, 14(9).
Oct 25, 2023
-
- [Signals Flash] WE'VE GOT YOU COVERED (TO VARYING DEGREES)
- SRG (Signals Research Group) conducted this study with the support of Accuver Americas (XCAL5 and XCAP). SRG used a Galaxy S23 smartphone to test downlink performance in a cluster of 10 Gbps cell sites with 1x180 MHz of Band n41, 2x20 MHz of Band n25, and 2x20 MHz of Band n71. SRG primarily conducted drive tests along rural roads as well as in suburban neighborhoods, which were ideal for a fixed wireless access (FWA) service offering. For more detailed information, please refer to the attachment.
Oct 19, 2023
-
- Accuver Benchmarking Test Solutions
- As the advancement of 5G technology leads to increased mobile data usage, the need for benchmarking tests is growing in order to assess and compare network performance effectively. With over 20 years of experience in network optimization, Accuver offers a comprehensive solution to evaluate and compare mobile network performance, coverage, and service quality from the user's perspective. ▶ Click to get the full version of Accuver Benchmarking Test Solutions
Aug 29, 2023
-
- [Signals Flash] IT'S GETTING HOT N HERE, SO...
- SRG (Signals Research Group) conducted this study with the support of Accuver Americas (XCAL5/XCAL-Solo and XCAP). SRG just completed its 32nd 5G benchmark study. For this endeavor, SRG collaborated with Accuver Americas to conduct an independent benchmark study of 5G mmWave 4-component-carrier (4CC) uplink performance on AT&T's commercial network in Glendale, AZ. For more detailed information, please refer to the attachment.
Jul 10, 2023