Fast Phase Recognition of Mechanical Helical Phased Array Antenna Element Based on Line-Scan Machine Vision

Mechanical helical phased array antenna (HPAA) realizes microwave beam steering by mechanically rotating the helical antenna elements. Measuring and calibrating the phase of the HPAA, which can be simplified to measuring and calibrating the phase of each helical antenna element, is essential for evaluating antenna performance. To realize fast phase measurement and calibration of the element antenna, this article proposes a line-scan machine vision (MV) strategy. A prototype MV system combining a line-scan camera with a guide-rail structure is designed to acquire images of the helical antenna elements. A phase recognition algorithm based on the deep-learning models you only look once v8 (YOLOv8) and pixel difference networks (PiDiNet), with field-of-view-error (FOVE) minimization, is proposed to perform element detection and edge detection and, further, to determine the angular state that quantifies the phase of each helical antenna element. Statistical analysis of experimental results on a 5 × 8 test antenna array shows that ±1° phase recognition accuracy can be achieved within 43 s, demonstrating the accuracy and efficiency of the proposed method. Moreover, the measurement uncertainty of the proposed approach is analyzed and quantified with a normal distribution, showing that 83% of recognition errors fall within 1° and demonstrating its stability and reliability.


Chang Guo, Song Qiu, Member, IEEE, Tailai Ni, Bangji Wang, and Qingxiang Liu

The authors are with the School of Physical Science and Technology, Southwest Jiaotong University, Chengdu 610031, China (e-mail: song.qiu@swjtu.edu.cn).

Digital Object Identifier 10.1109/TIM.2023.3329160

I. INTRODUCTION
Phased array antenna (PAA) is a configuration of multiple antenna elements that enables beam synthesis and control by manipulating the phase relationship between the ports. It finds applications in radar systems [1], communication networks [2], electronic warfare systems [3], and other domains that require beam scanning, high antenna gain, and multibeam forming [4]. PAA achieves these capabilities by precisely controlling the amplitude and phase of each antenna element and thus manipulating the overall phase distribution. Various methods for phase shifting exist, including electronic and mechanical control. Electronic phase shifting utilizes circuitry or dedicated devices [5], [6], while mechanical phase shifting normally involves a mechanism that physically alters antenna element positions. With the demand for higher power density from the PAA, helical antennas can be used to form the helical PAA (HPAA), which has excellent power density characteristics [7] and accurate point-to-point scanning achieved by rotating elements by a certain angle: a 1° rotation corresponds to a 1° phase change [8].
In the mechanical approach described in [8] and [9], a mechanical controller, that is, a dc motor, is used to drive the helical antenna element. Thus, in the mechanical HPAA, mechanical controllers are utilized to rotate the antenna elements and thereby alter the phase of the array. Based on the inherent principles of the mechanical HPAA, it is essential to ensure that the antenna elements rotate accurately and in a timely manner. Inaccurate rotation of the antenna elements will significantly degrade the performance of the mechanical HPAA, for example, its power distribution, phase scanning, and beam pointing [10], [11]. To address such issues, phase measurement and calibration [12] must be conducted at both the manufacturing and operation stages of the antenna [13].
The conventional methods to calibrate the phase of the HPAA system are categorized in Fig. 1: the far-field test and the near-field test [14]. The far-field test is conducted in an outdoor test field, typically an open-air far field, which presents complex testing conditions, for example, ground reflections and environmental influences. In addition, a test tower is required, and the test results can be significantly affected by environmental conditions. On the other hand, the near-field test is normally conducted in a wave-absorbing dark-room environment called an anechoic chamber. The test results for antenna performance parameters can be more accurate, but the drawbacks are that it is less cost-effective and more time-consuming.
A. Related Works

1) Calibration of the HPAA: Antenna calibration can be conducted using two approaches: individual element calibration or array-wide calibration. Liu and Yang [15] claimed that conventional methods of phase measurement and calibration require a significant number of scanning points in the near field and a large-scale measurement setup in the far field. To make the tests faster and more efficient, a compact antenna test range (CATR) conversion measurement system was proposed. This system converts the spherical wave generated at a short distance from the antenna into a quasi-plane wave, effectively obtaining a region that meets the requirements for antenna measurements. Although the proposed system combined far-field testing with near-field testing for effective measurement, it was complicated and expensive to set up. The authors of [16] employed a probing technique to periodically scan the antenna array. They activated each element antenna separately and then measured the phase and amplitude offsets between all elements. However, the measurement and calibration were time-consuming.
The mechanical HPAA, unlike the electronic PAA, introduces additional errors due to the mechanical rotation error of individual helical antenna elements [17]. Therefore, the calibration of the HPAA needs to be performed more frequently. In industry particularly, a standard line-ruler matching the size of the array is normally used: each helical antenna is rotated until its end contacts the ruler, and since the rotation is limited by the ruler, each column of antennas is guaranteed to rest at the same initial angle. The line-ruler method is illustrated in Fig. 2. This calibration method is cumbersome and has limitations in terms of accuracy and efficiency, especially when dealing with a large-dimension HPAA.
2) Phase Recognition Based on MV: The helical antenna array is composed of helical antenna elements. Fig. 3 illustrates a common type of helical antenna unit used in the array. The phase of the HPAA is determined by the phase of each helical antenna element [7], that is, by the angular state of the antenna. Therefore, it is promising to find a way to measure the angular state of the helical antenna for fast phase recognition and further calibration of the antenna.
Machine vision (MV) has been emerging in measurement and calibration. It is an important branch of computer science and artificial intelligence, encompassing diverse technologies, hardware, and software. It finds extensive applications in various tasks, including visual classification [18], industrial inspection [19], autonomous driving and robot applications [20], [21], industrial high-precision detection [22], [23], and so on.
3) Machine Vision: MV is a combination of three processes [24]: image capturing, digital image processing, and machine learning. With the development of high-performance computing power, MV for measurement and calibration has entered the era of deep learning [25], with two essential stages: object detection and edge detection.
Object detection is one of the most prominent tasks in the field of MV, and it has gained significant attention in computer vision and image processing due to the continuous development of deep learning [26]. In the deep-learning era, object detection algorithms can be grouped into two genres: "two-stage detection" and "one-stage detection" [27]. A two-stage detection algorithm first generates object candidate boxes and then classifies the proposal boxes; the typical representative is the region-based convolutional neural network (R-CNN) series [28]. A one-stage detection algorithm directly generates the category probability and position coordinates of the object without candidate boxes, which greatly improves detection speed. The representatives of one-stage detection are the you only look once (YOLO) series [29] and the single-shot multibox detector (SSD) [30]. Among these algorithms, the YOLO series has been flourishing in various fields thanks to its real-time and lightweight characteristics: garbage classification [31], pedestrian target detection [32], ship detection [33], remote-sensing image object detection [34], detection of flying birds in airport monitoring [35], drone detection [36], and so on. In particular, YOLOv8, shown in Fig. 4(a), the latest iteration in the YOLO series, has made significant advancements in speed, accuracy, and ease of deployment as a state-of-the-art (SOTA) model [37].
Edge information is a fundamental feature of an image, and the detection and extraction of edges have become crucial research topics in the field of image processing. Traditional edge detection methods primarily rely on calculating image gradients and designing specific filters to extract edge information, such as the Roberts, Sobel, and Canny operators. With the advancement of computer technology, particularly the emergence of convolutional neural networks (CNNs), deep-learning approaches have gained prominence in edge detection. The DeepContour [38] model used CNNs to learn edge information in images in 2015 and achieved good results on the BSDS dataset. The holistically-nested edge detection (HED) [39] model further proposed a multiscale and multilevel edge detection network in 2016 and realized more precise and accurate edge detection by combining features from different levels. The pixel difference network (PiDiNet), shown in Fig. 4(b), was proposed at ICCV 2021 [40]; it is a simple, lightweight, but effective architecture for edge detection. By combining the strengths of traditional feature descriptors with the power of CNN convolution, PiDiNet enhances the encoding and understanding of image content, enabling more efficient and accurate edge detection [40].
To the best of our knowledge, there is little work applying MV to phase recognition of helical antennas. It is therefore promising to utilize an MV system and develop the associated algorithms to realize SOTA fast phase recognition of the mechanical HPAA.

B. Contribution and Organization
Inspired by the development of MV technology and to address the demand for fast phase recognition of mechanical HPAA elements, a line-scan MV approach is proposed. The proposed approach differs from the conventional ones in that, instead of utilizing a standard line-ruler to manually measure and then calibrate the antenna elements, an MV-powered automatic mechanical guide rail with a line-scan camera system captures the phase angle of each antenna element in the HPAA, realizing precise phase measurement of helical antennas and significantly enhancing the calibration accuracy and efficiency of the mechanical HPAA. The main contributions of this article are:
1) a phase recognition method for element helical antennas based on MV, combining YOLOv8 and PiDiNet, is proposed;
2) an MV system for HPAA phase recognition, combining a line-scan camera and a guide-rail system, is designed and built;
3) the performance of the proposed phase recognition system is statistically examined to demonstrate its efficiency, accuracy, and stability; and
4) a dataset of helical antenna elements is made publicly available to support the field of mechanical HPAA calibration.
The rest of the article is organized as follows. In Section II, the proposed MV system and image algorithm are introduced and explained. In Section III, the experimental platform setup and system performance evaluation procedures are described. In Section IV, the experimental results are analyzed to show the feasibility and applicability of the proposed system. Finally, in Section V, the article is summarized and concluded, with a discussion of the limitations and future prospects of the proposed method.

II. PROPOSED MV SYSTEM AND IMAGE PROCESS ALGORITHM
To determine the angular state of the helical antenna element, three fundamental stages of MV are considered.
1) Object Detection: To detect the helical antenna, its center, and its end.
2) Edge Detection: To detect the edges of the center and end, which help locate their arc centers and determine the angular state.
3) Angle Calculation: To find the arc centers of the geometry center and end of the antenna element and to address the field-of-view error (FOVE) in calculating the angular state.
In this section, an image acquisition system combining a guide rail and a line-scan camera is presented in detail. Furthermore, the processing of image data of the helical antenna element using the YOLO and PiDiNet networks is explained. In addition, FOVE correction is considered to minimize the angular state computation error caused by the viewing position of the line-scan camera. The proposed image acquisition system is shown in Fig. 5, and the recognition process of the proposed strategy is shown in Fig. 6.

A. Image Acquisition System
The helical antenna element rotates around an axis perpendicular to the plane of the antenna array, as shown in Fig. 3(a) and (b). The angle of the helical antenna is defined as the angle between a reference line and the line connecting the arc centers of the geometry center and the end of the antenna element. The angular state to be determined is shown in Fig. 3(d). The reference line is usually the line on which the geometry centers of the antenna elements in the same row lie. Therefore, to recognize the angle of the antenna element, images should be acquired from the top view. Thus, an image acquisition system using a line-scan camera and a guide rail is designed.
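To make the angle definition concrete, the following minimal Python sketch computes the angular state once the two arc centers are known. The function name, the coordinate convention, and the sign handling are illustrative assumptions, not taken from the paper.

```python
import math

def angular_state(center_xy, end_xy):
    """Angle (deg) between the row reference line (horizontal) and the
    line joining the geometric-center arc center to the end arc center,
    normalized to [0, 360)."""
    dx = end_xy[0] - center_xy[0]
    dy = end_xy[1] - center_xy[1]
    # atan2 resolves the quadrant; image y-coordinates usually grow
    # downward, so the sign of dy may need flipping in practice
    return math.degrees(math.atan2(dy, dx)) % 360.0

# An end lying on the row line to the right of the center reads as 0 deg.
print(angular_state((120.0, 80.0), (200.0, 80.0)))
```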
1) Line-Scan Camera: The line-scan camera is widely used in the field of image recognition, mainly to capture and process images of high-speed moving objects quickly and accurately [41]. A line-scan camera reconstructs one line of pixels per exposure, which makes it well suited to scanning the antenna array. The operating principle of the line-scan camera is illustrated in Fig. 7. The tunnel light source shown in Fig. 8 provides uniform illumination by diffusing the emission of a high-angle LED through a curved reflective plate. It is chosen for the line-scan camera to suppress surface reflections from the metallic material of the antenna elements.
To ensure correct imaging and maintain the same horizontal and vertical resolution, the moving speed of the line-scan camera needs to be synchronized with the capture frequency. This can be achieved by

$$V_0 = \frac{\mathrm{FOV}}{\mathrm{Resolution}} \times V_c \tag{1}$$

where FOV represents the size of the field of view, Resolution is the number of pixels per line of the line-scan camera, $V_0$ is the relative motion speed, and $V_c$ is the line acquisition frequency of the line-scan camera.
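As a quick illustration of (1), the sketch below computes the required motion speed for hypothetical camera parameters; the numbers are placeholders, not the paper's hardware settings.

```python
def required_scan_speed(fov_mm: float, resolution_px: int, line_rate_hz: float) -> float:
    """Relative motion speed V0 (mm/s) that keeps horizontal and vertical
    resolution equal for a line-scan camera, per Eq. (1):
    V0 = (FOV / Resolution) * Vc."""
    return fov_mm / resolution_px * line_rate_hz

# Hypothetical numbers: a 200 mm field of view on a 2048-pixel line sensor
# at a 10 kHz line rate requires roughly 977 mm/s of relative motion.
print(required_scan_speed(200.0, 2048, 10_000.0))
```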
2) Guide Rail Mechanism: Acquiring images with a line-scan camera requires relative movement between the antenna elements and the camera; thus, a high-precision moving guide rail mechanism is designed. The 2-D guide rail and the HPAA plate are integrated, which allows the camera to scan and move in a plane parallel to the array with high precision.
As shown in Fig. 5, by constructing a combination of three guide rails (X, Y1, and Y2), the image acquisition system fixed on them can achieve complete coverage scanning of the plane parallel to the antenna array. At the same time, this 2-D moving-plane design allows acquisition of the top-down image of the array with the line-scan camera. Furthermore, the guide rail is integrated with the array itself, forming a unified system that makes calibration more convenient and faster.

B. Detecting the Antenna With YOLOv8
1) YOLOv8 Dataset Creation: Since there is no publicly available dataset of helical antennas, an antenna dataset is collected and constructed on a test array using the image acquisition system and a mobile phone.
a) Image Collection: First, the helical antennas are captured across the array using the line-scan camera. Images are collected under six different light source illuminances, as shown in Table I. This allows the helical antenna images to be captured under different lighting conditions and increases the complexity of the dataset. Additionally, a mobile phone camera is employed to capture images from different angles, which further enhances the dataset's diversity and, consequently, improves the robustness of the model. A total of 600 images are captured using the line-scan camera, with 100 images for each of the six different light source powers. The composition of the light source specifications is shown in Table I as well.
b) Image Annotation: In this task, the annotations are categorized into three groups: helical antenna, center of helical antenna, and end of helical antenna. On average, each image contains one to six instances of helical antennas and their corresponding centers and ends. In total, 2739 helical antennas, 1749 helical antenna centers, and 1662 helical antenna ends are annotated in the dataset.
c) Data Augmentation: The purpose of data augmentation is to expand the dataset and improve the model's robustness [42]. The main variations in this detection task involve changes in image brightness and antenna size; situations with overlapping antennas or similar occurrences are unlikely. Thus, augmentation techniques consistent with the actual situation, such as translation and rotation, are used. As a result, the dataset of 700 photographs is augmented to a total of 2000 photographs, enabling a more comprehensive and robust training process.
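A minimal augmentation sketch consistent with the variations described above (brightness, translation, rotation) might look as follows. The parameter ranges are assumptions for illustration, and in practice the bounding-box labels must be transformed together with the image.

```python
import random
import cv2
import numpy as np

def augment(img: np.ndarray) -> np.ndarray:
    """Brightness, translation, and rotation augmentation matching the
    variations actually seen in this task (no occlusion or overlap)."""
    h, w = img.shape[:2]
    # brightness: scale pixel values to mimic different light source powers
    img = cv2.convertScaleAbs(img, alpha=random.uniform(0.6, 1.4), beta=0)
    # translation: shift by up to 10% of the image size
    tx, ty = random.uniform(-0.1, 0.1) * w, random.uniform(-0.1, 0.1) * h
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    img = cv2.warpAffine(img, M, (w, h))
    # rotation about the image center (box labels must be rotated too)
    R = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-15, 15), 1.0)
    return cv2.warpAffine(img, R, (w, h))
```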
2) YOLOv8 Model Scale Selection: YOLOv8 offers five model sizes: YOLOv8n (nano), YOLOv8s (small), YOLOv8m (medium), YOLOv8l (large), and YOLOv8x (extra-large). The number of parameters and the network size of these models increase in sequence, and the different network structures are mainly designed to adapt to learning and training in different scenarios. The antenna dataset proposed in this article is trained on all five YOLOv8 model scales and validated on the test set; the results are summarized in Table II. With input images scaled to 640 × 640 pixels by the YOLOv8s algorithm, the inference time averages 7.9 ms, which is a fast inference speed for a detection model.
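For reference, training and running a YOLOv8 scale variant through the Ultralytics Python API follows the pattern below; the dataset YAML name, weights file, and epoch count are placeholders rather than the paper's actual configuration.

```python
from ultralytics import YOLO

# Train the small variant on the helical-antenna dataset (placeholder paths).
model = YOLO("yolov8s.pt")
model.train(data="helical_antenna.yaml", imgsz=640, epochs=100)

# Inference returns boxes for the three classes: antenna, center, end.
results = model("row_crop.jpg")
for box in results[0].boxes:
    print(int(box.cls), box.conf.item(), box.xyxy.tolist())
```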

C. Edge Detection of the Antenna With PiDiNet
PiDiNet is a supervised neural network, shown in Fig. 4(b), that learns deeper and better edge feature information from the dataset. The general training process of PiDiNet is as follows.
1) PiDiNet Dataset Creation: Labeling edge data is a more intricate task than object detection labeling, because it necessitates precise labeling of edge features to enhance the regression capability of the neural network.
a) Image Collection: To gather a dataset specifically targeting the centers and ends of helical antennas, images are extracted from the YOLO dataset used previously, where annotation or detection has already been performed. This dataset therefore exclusively comprises images of antenna centers and ends, each about 220 × 220 pixels, which roughly matches the scale encountered in the actual edge detection situation.
b) Image Annotation: The accuracy of edge labeling is crucial for effective network training. In the proposed approach, Krita, an open-source drawing software, is employed to import the extracted images of the antenna centers and ends. To facilitate comparison with the object elements, new layers are created and their saturation is reduced, aiding in the differentiation of the elements. Following this, various shades of gray are applied to fill different areas of the antenna elements, thereby enhancing visual contrast.
To further refine the images and mitigate undesired edge detection, smearing tools are employed. Subsequently, an analysis of the disparities between the two sides of the antenna is conducted. By generating the mask and applying the Canny edge detection operator from the OpenCV library, the resulting edge image is used to label the edges of the antennas.
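A minimal sketch of this label-generation step, assuming the painted grayscale mask has been exported from Krita and with illustrative threshold values:

```python
import cv2

# Turn a painted grayscale mask into a one-pixel-wide edge label with the
# Canny operator. File names and thresholds are placeholders.
mask = cv2.imread("antenna_center_mask.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(mask, 50, 150)
cv2.imwrite("antenna_center_edge_label.png", edges)
```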
2) PiDiNet Training: The CARV4 parameters and the CSAM and CDCM modules are used for model training. This combination has been shown to be the optimal scheme by the original authors [40], and the training hyperparameters are listed in Table III.
3) PiDiNet Inference Effect: Inference results are compared between the Canny operator and two distinct trained PiDiNet models: the PiDiNet pretrained on a public dataset and the PiDiNet trained on our dataset, as shown in Fig. 9. The traditional Canny operator is easily affected by illumination and by surface defects of the antenna elements, producing considerable noise in which the true boundary information is easily submerged. The PiDiNet trained on a public dataset does not perform well enough for this task; it is necessary to train on the specific antenna center-and-end dataset.

D. Image-Processing Details
In this part, a detailed description of the image-processing techniques utilized in the practical application is provided, including object recognition matching and image acquisition strategies. Furthermore, a thorough analysis of the camera's FOVEs across different planes is conducted, and the necessary corrections are implemented to mitigate these errors.
1) Antenna Postrecognition Details: The object detections of YOLOv8 are not returned in a consistent order, which would make it impossible to match each antenna with its center and end. To resolve this problem, a pairing process is implemented that takes into account the compositional relationship between the center, the end, and their respective antennas based on their distance relationship. The pairing algorithm is explained in Algorithm 1, and Fig. 10 illustrates the pairing decision process. When the decision x = q is "NO," it indicates the absence of a corresponding Bbox for matching, as the nearest match does not align with itself. Similarly, if the decision threshold is "NO," the pairing does not meet the standard criteria; this threshold may vary depending on the actual antenna size.
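Algorithm 1 itself is not reproduced in the text; the sketch below illustrates one plausible reading of it as mutual-nearest-neighbor matching with a distance threshold, with all names and the threshold handling being our assumptions.

```python
import numpy as np

def pair_parts(antenna_centers, part_centers, threshold):
    """Match each detected part (center or end) to its antenna box by
    mutual nearest neighbor within a distance threshold. Inputs are
    (N, 2) / (M, 2) arrays of Bbox centroids in pixels."""
    A = np.asarray(antenna_centers, dtype=float)
    P = np.asarray(part_centers, dtype=float)
    if len(A) == 0 or len(P) == 0:
        return []
    # pairwise Euclidean distances, shape (N, M)
    D = np.linalg.norm(A[:, None, :] - P[None, :, :], axis=2)
    pairs = []
    for i in range(len(A)):
        j = int(np.argmin(D[i]))             # nearest part to antenna i
        if int(np.argmin(D[:, j])) != i:     # "x = q" check: must be mutual
            continue
        if D[i, j] > threshold:              # threshold check from Fig. 10
            continue
        pairs.append((i, j))
    return pairs
```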
2) FOVE Correction: Cameras without depth information inevitably introduce FOVE due to the imaging process of the equivalent lens. As mentioned in Section II, the line-scan camera captures images with pixels aligned in the direction of motion, which means that there is no visual field error in this direction, as shown in Fig. 11(b).
The FOVE in a line-scan camera arises in the direction perpendicular to the motion. Fig. 11 illustrates the FOVE of a line-scan camera when capturing a helical antenna. The left side of the diagram illustrates the projection of planes at different heights through the equivalent lens onto the line-scan camera: the object is projected onto the CMOS through the equivalent lens, L represents the distance from the lens to the plane of the antenna center, l represents the distance from the lens to the plane of the antenna end, and h is the height difference between the center plane and the end plane. Because the center and the end of the antenna lie in different imaging planes (i.e., at different object distances), an error exists between their true projections and their pixel projections.
As shown in Fig. 11(a), assuming that the antenna center plane (the BE plane) serves as the imaging plane, the actual projection of point C on the antenna end, which lies in another plane, should be D. However, due to the presence of FOVE, point C is instead projected to point E, resulting in an error represented by the line segment DE, that is, the discrepancy between the true projection and the observed projection. Based on the triangle relation of lens imaging from different planes, (2) is derived

$$\frac{BE}{BD} = \frac{AB}{AB - CD} \tag{2}$$

In (2), the length AB is equal to L, which can be determined from the working distance setting; CD refers to the height difference between the center and the end; and the length BE can be obtained from the pixel coordinates. The real coordinate BD can be solved from (2); thus, the FOVE, DE = BE - BD, can be calculated, which enables field correction for planes at different heights.
Furthermore, it is crucial to note that the equation indicates that as the points move farther away from the center of the lens (FC), the FOVE (DE) grows linearly. Therefore, to correct the error and minimize its range, it is desirable to bring the correction points as close as possible to the camera's imaging center.
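Assuming the reconstruction of (2) above, the correction reduces to a one-line computation; the helper below is an illustrative sketch, not the authors' code.

```python
def fove_correct(be_mm: float, ab_mm: float, cd_mm: float) -> float:
    """Correct a projected coordinate for field-of-view error using the
    similar-triangle relation of Eq. (2): BD = BE * (AB - CD) / AB, where
    AB is the working distance to the imaging plane, CD the height offset
    of the point's true plane, and BE the observed coordinate (converted
    from pixels to millimeters)."""
    bd = be_mm * (ab_mm - cd_mm) / ab_mm   # true in-plane coordinate
    # the error DE = BE - BD = BE * CD / AB grows linearly with BE,
    # which is why the camera axis is kept close to the points of interest
    return bd
```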
When the imaging system moves along the line of the antenna centers, the imaging plane is set on the end plane of the antenna, so that the plane subject to FOVE becomes the plane of the antenna centers. This strategy significantly reduces the FOVE by reducing the distance from the error points to the lens center. The details of this strategy are explained in Section II-D3.
3) Line-Scan Camera Imaging and Image Preprocessing: The line-scan camera uses a pixel-row splicing imaging mode, which allows large arrays to be captured row by row. Taking into account the FOVE issue discussed in Section II-D2, the line-scan camera moves along the center line of each row when capturing images, as shown in Fig. 12(a).
Moreover, the images generated by the line-scan camera for each row are typically large and elongated, especially when capturing large arrays. These images cannot be processed directly by the algorithm; therefore, they are cropped to an appropriate pixel size for input to YOLO and PiDiNet.
However, image cropping may cut off an antenna, so at least one antenna size must be retained between two adjacent cropped images. Therefore, the overlap d, shown in Fig. 12(c), is defined to correspond to an actual space longer than the size of the antenna.
Additionally, in some cases, certain antennas may appear in the overlapping regions between adjacent cropped images. To avoid redundant calculations and ensure accurate detection, Algorithm 2 is introduced to remove duplicate antennas. This step recognizes and eliminates redundant antenna detections that occur in the overlapping areas of the cropped images; Fig. 13 illustrates the process. When the distance between two boxes falls below a predefined threshold, the box is deemed a duplicate and is removed.
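A minimal sketch of such distance-based duplicate removal (one plausible reading of Algorithm 2, with the names and the greedy keep-first strategy being our assumptions):

```python
def remove_duplicates(boxes, min_dist):
    """Drop antenna detections that re-appear in the overlap between two
    adjacent crops. `boxes` holds (x, y) box centers already mapped into
    the global image coordinate frame; a box closer than `min_dist` to an
    already-kept box is treated as a duplicate."""
    kept = []
    for x, y in boxes:
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_dist ** 2 for kx, ky in kept):
            kept.append((x, y))
    return kept
```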

III. EXPERIMENTAL SETUP
In this section, the setup of the experimental platform will be described, the experimental results will be analyzed, and the recognition system will be evaluated.

A. System Configuration
Based on the design, the image acquisition system is constructed as shown in Fig. 14. For convenience and adjustability of testing, a small array with five rows and eight columns is used as the test HPAA.

B. System Performance Evaluation Procedure
To validate the feasibility of the approach and assess the recognition accuracy and stability of the algorithm and hardware system as a whole, numerous repeated experiments are conducted on the test HPAA.
1) Stability: The stability of phase recognition is crucial for this application and can be evaluated in two respects: the consistency of repeated recognition within the same collected picture and the consistency of repeated recognition across multiple pictures of the same HPAA in a fixed state. Additionally, the stability of the hardware system is tested by repeatedly capturing images of the HPAA in the same state.
The test array consists of eight columns, with one column captured per picture. To cover all 40 elements in the antenna array, a total of eight pictures is required for one state. For each state, image acquisition is repeated 20 times, and three angular rotations are carried out, so 640 pictures are finally collected in four groups representing the four states. From these 640 pictures, the phase information of the 40 element antennas in the four states is obtained. For detailed analysis and results, refer to Section IV.
By analyzing the stability of phase recognition in these situations, the stability and consistency of system performance are evaluated.
2) Accuracy: The accuracy of the proposed approach is evaluated in two aspects, namely, algorithm accuracy and system measurement accuracy.
a) Algorithm Accuracy: First, the recognition accuracy of YOLO and PiDiNet is evaluated by mAP50 and the F-1 score, respectively. Mean average precision (mAP) is a commonly used metric for evaluating the performance of object detection algorithms [43]; it represents the average of the average precision (AP) over all classes and images. mAP50 means that an IoU ratio of 0.5 is chosen as the threshold value: the larger this value, the stricter the requirement on the detection effect. F-1, also known as the F-score or F-measure, is a crucial performance metric in the evaluation of edge detection algorithms [40]. It provides a balanced measurement of the algorithm's precision and recall, which are both critical aspects of edge detection; a higher score means a better edge detection effect.
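For reference, the F-1 score is the harmonic mean of precision $P$ and recall $R$:

$$F_1 = \frac{2PR}{P + R}, \qquad P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

where $TP$, $FP$, and $FN$ denote true positives, false positives, and false negatives, respectively.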
b) System Measurement Accuracy: The accuracy of the system is verified by rotating the antenna array through the control system and calculating the angle difference before and after the rotation. Indeed, the difference in angle values before and after rotation reflects the accuracy of the recognition system, since there is no direct way to verify the angle of a helical antenna. The actual rotation angle of each antenna can be obtained from the returned value of its encoder. By comparing the commanded rotation angle with the difference in recognized angles before and after rotation, the accuracy of the recognition system can be indirectly assessed. Specifically, three angular rotations are carried out in the experiment, corresponding to the images in the four angle states mentioned earlier. By computing the discrepancies between state 1 and state 2, state 2 and state 3, and state 3 and state 4, three groups of data reflecting the angles before and after rotation are obtained. This methodology indirectly elucidates the recognition accuracy and the measurement uncertainty range of the system. Refer to Section IV for a comprehensive analysis.
The sample mean ($\bar{x}$) is used to evaluate the performance of the model based on the numerous repeated experimental data that represent the same state. The mean is computed as

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \tag{3}$$

where $\bar{x}$ represents the mean value of the data, $n$ represents the number of data points in the sample, and $x_i$ represents each individual data point; $\bar{x}$ reflects the average level of the statistical data.

3) Measurement Uncertainty: Measurement uncertainty analysis is a necessary evaluation process for a measurement system [44], [45], [46]. The uncertainty sources of the proposed system mainly include: 1) YOLO uncertainty, in that the detection process may return incorrect objects; 2) PiDiNet uncertainty, in that the edge detection process may introduce differences in the center position after the Hough circle fitting; and 3) FOVE correction uncertainty, in that the calculation of the FOVE may deviate due to the installation or fabrication tolerance of the antenna element: installation tolerance affects the height of the antenna element on the array, while fabrication tolerance affects the absolute horizontal distance between the center and the end.

IV. ANALYSIS

A. Preliminary Experimental Result
Taking one run of the angle calculation as an example, a partial recognition result is shown in Fig. 15. Given the substantial size of each image, three rows of antennas in a single column are shown. Notably, only the middle row, as explained in Section II-D2, represents the final recorded angle due to its minimal error.
Fig. 15(a) shows the antennas, centers, and ends detected by the YOLOv8 algorithm, together with the edge information of the antenna centers and ends detected by PiDiNet. To facilitate processing, edge thinning techniques are also applied to enhance edge clarity. Fig. 15(b) illustrates the Hough quasi-circle fitting process based on the edge results. During this process, the FOVE is corrected for the central quasi-circle. The green circle represents the fitting result from the edge detection, while the yellow circle represents the circle obtained after correcting for the FOVE. It can be observed that the yellow and green circles align closely for a column situated at the center of the image pixels, indicating minimal FOVE around the center of the camera. Fig. 15(c) shows the final recognition angle results for each element antenna. Fig. 15(d) shows an additional recognition result obtained after rotating the antennas by 180°.
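As an illustration of the circle-fitting step, the following sketch fits a circle to a thinned edge map with OpenCV's Hough transform for circles; the file name and all parameter values are placeholders, not the authors' settings.

```python
import cv2
import numpy as np

# Fit a circle to the thinned PiDiNet edge map with the Hough transform
# for circles; parameters are illustrative only.
edges = cv2.imread("center_edge_thinned.png", cv2.IMREAD_GRAYSCALE)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=15, maxRadius=60)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"arc center at ({x}, {y}), radius {r} px")
```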

B. Recognition Stability
First, experiments are conducted to repeatedly recognize the same image. When the image remains unchanged, the recognition results remain identical, which demonstrates the deterministic stability of the algorithm.
Subsequently, recognition is conducted on different images captured under the same angle states. Theoretically, despite being captured at different times, the recognition results for each state should be the same. However, slight variations in the imaging process inevitably introduce minor differences that interfere with the algorithm. Fig. 16 shows a box plot in which each same-colored box represents 20 repeated measurements of the same antenna in a certain state. The deviation of the repeated measurements is basically within ±1.5°, and most of the data deviations are less than 1°. It can be observed from each repeated test that the overall recognition stability is satisfactory. These findings provide evidence of the algorithm's ability to produce reliable recognition results for images under the same angle states. Despite minor variations, the algorithm demonstrates stability, which is crucial for its effectiveness and practical applicability.

TABLE V
YOLO AND PIDINET ACCURACY METRICS EVALUATION

C. Recognition Accuracy
1) Evaluation of Algorithm: First, the accuracy of the algorithms is evaluated and summarized in Table V.
All three object classes recognized by YOLO score above 0.98 on the mAP50 metric, indicating that YOLO object detection is capable of recognizing nearly all instances in this dataset with high accuracy. In the case of PiDiNet, detection focuses solely on the center and end edges. Both the center and end edge detection F-1 scores surpass 0.76, demonstrating sufficient accuracy for the edge detection operator.
2) Evaluation of the System: In the accuracy validation experiment, a repeated rotation experiment is conducted in which the 40 antennas are intentionally rotated by 180° based on the feedback from their encoders. Fig. 16 shows the difference of the average values of "State1-State2" as a representative, marked with a black arrow; most of the differences remain around ±0.3°, which shows that the recognition accuracy remains at a high level.
Of course, more difference data are available for analysis. Counting the recognition angle differences of the 40 antennas with 20 repetitions in each group yields a total of 48 000 recognition accuracy data points, which are drawn as the recognition error statistical histogram for the 180° rotation in Fig. 17. The histogram shows that 83% of the recognition errors fall into the range of (−1°, 1°).
Overall, the recognition system demonstrates commendable accuracy, with the majority of errors falling within the 1° range. This level of accuracy is highly satisfactory for the requirements of phased array applications.

D. Measurement Uncertainty
To further verify the observed accuracy of our proposed measurement system, and based on the uncertainty sources examined, the measurement uncertainty of our approach is analyzed as follows.
1) YOLO Uncertainty: First, YOLO recognizes the objects with a relatively high success rate (mAP50 reaches 0.986). However, cases of missed or incorrect detection are possible. In the event of a missed detection, it is highly likely that a correct angle cannot be output; in such cases, the proposed algorithm receives error information for evaluation. Incorrect detection presents a more complex situation, in which there is a high probability of outputting an erroneous angle value, making it difficult to ascertain the specific incorrect angle. The error recognition rate can be evaluated from the test data: by treating test errors exceeding 5° as YOLO judgment errors, the YOLO accuracy can be estimated as 99%.
2) PiDiNet Uncertainty: Second, obtaining the coordinates of the center and end arc centers with the Hough transform for circles is the most uncertain part of the entire algorithmic process. The mathematical procedures involved are complex, and the specific pixel distribution of the image can hardly be quantified. Based on the analysis of the experimental data, it is believed that the error of the approximate circle center from PiDiNet lies within a circle centered on the true value with a radius of ten pixels.
3) FOVE Correction Uncertainty: The FOVE correction uncertainty is caused by the tolerances of antenna installation and fabrication. As mentioned in Section II-D2, the calculation of the correction value DE requires the values AB and BD associated with the antenna array. Specifically, AB corresponds to the operational distance, while BD denotes the horizontal separation between the center and the end of the antenna. Theoretically, AB and BD should both be constant, but the uncertainty of the installation height changes AB, and the fabrication tolerance of the antenna itself changes BD. Based on the examination of our test antenna elements and installation positions, both AB and BD are considered to exhibit an uncertainty of ±0.1 mm. The composition of the uncertainty is summarized in Table VI.
4) Statistical Analysis of the Uncertainty: To further analyze the histogram of recognition errors statistically, curve fitting is applied in Fig. 17. Based on the fitting results, the recognition error data can be fit with a normal distribution with mean µ = −0.016°, standard deviation σ = 0.7431°, and a quantitative fitting accuracy of RMSE = 0.23. In this case, it can be concluded that the system measurement accuracy is satisfactory and that the measurement uncertainty can be quantified as a normally distributed function.
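The statistical analysis above can be reproduced with a few lines of SciPy; the sketch below assumes the recognition-error samples are available in a plain text file (a placeholder name).

```python
import numpy as np
from scipy import stats

# Fit the recognition-error samples with a normal distribution and report
# the share of errors inside +/- 1 degree.
errors_deg = np.loadtxt("recognition_errors.txt")  # placeholder data file
mu, sigma = stats.norm.fit(errors_deg)
within_1deg = np.mean(np.abs(errors_deg) < 1.0)
print(f"mu = {mu:.3f} deg, sigma = {sigma:.4f} deg, "
      f"{100 * within_1deg:.0f}% of errors within 1 deg")
```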
Fig. 18. Time axis of the system; the image acquisition and image processing of each stage of the system are carried out simultaneously.

E. Recognition Efficiency
The recognition speed is a significant feature of the system. Although a small test array is used for testing purposes, it is plausible to estimate the speed of the entire system, as shown in Fig. 18.
As mentioned before, the integrated system design with the HPAA provides great convenience for recognition and calibration at any time. The time spent by the recognition system is mainly divided into the image shooting time, that is, the time to acquire the image by moving the line-scan camera with the mechanical device, and the image processing time. After a column of images is shot, the images of that column enter the algorithm processing stage; thus, while the system is photographing the next column, it is simultaneously processing and recognizing the images of the previous column and recording the phase information of each antenna.
Specifically, as shown in Fig. 18, t_0 denotes the start time, and T_N = t_1 − t_0 denotes the startup time of the algorithm, in which initializing the neural network is the most time-consuming step, taking about 3 s. T_M = t_2 − t_1 denotes the time to capture a column of pictures, and T_T = t_3 − t_2 denotes the transmission time of the images. After the transmission is completed, the algorithm recognizes and processes the images and finally obtains the relevant angle values; this time cost is T_D = t_4 − t_3. Because image processing always lags behind image acquisition, the final running time of the whole system is the time t_5 at the end of the last column's image processing.
In practice, the line-scan camera collects images faster than the images can be recognized. In this example, the test array is 200 mm long in each column, so it takes only 200/690 ≈ 0.29 s to image one column; together with the time for column shifting, the total is less than 1 s. The average image processing time, however, is 5 s per column. In addition, the startup time of the neural network is about 3 s; thus, the acquisition time can essentially be disregarded, and the image processing time can be considered the overall system run time.
Because the size of each antenna array differs, and the size, spacing, arrangement, and number of antennas in the array differ as well, it is difficult to give a universal time formula. Generally speaking, the time consumption T can be represented as

$$T = T_N + N_{\mathrm{col}} \times T_D$$

where $N_{\mathrm{col}}$ is the number of columns and $T_D$ is the per-column image processing time. Considering the example of the experimental array, which consists of five rows and eight columns with a total of 40 element antennas, the total time required for accurate angle recognition is 3 + 8 × 5 = 43 s.
It is worth noting that this is only a small-scale HPAA, yet the system completes accurate angle recognition within 1 min, whereas manual recognition typically takes more than 5 min. Furthermore, when dealing with larger arrays, the manual process becomes significantly more time-consuming and prone to reduced accuracy. By utilizing the MV method, the system not only reduces the time required for recognition but also maintains a high level of accuracy and stability.

V. CONCLUSION AND OUTLOOK
In this work, an MV-based method for HPAA phase recognition and calibration is proposed, and a recognition system that integrates with the HPAA and utilizes a line-scan camera to image the array rapidly and precisely is designed and constructed. The algorithm uses YOLOv8 to recognize antennas, centers, and ends, and employs a trained PiDiNet for edge detection on the centers and ends, which enables better circular fitting and improved accuracy. Additionally, an imaging correction model for the helical antenna under the line-scan camera is established, allowing the final phase data to be obtained accurately. Overall, this approach significantly enhances the efficiency of calibration work for HPAA arrays.
However, certain limitations remain: for example, the mechanical tolerance introduced during fabrication of the antenna elements causes further uncertain errors, and a monocular camera cannot completely eliminate the FOVE; these issues need further investigation during deployment of the proposed approach. Future work will explore lenses with higher magnification to increase the number of usable pixels and further improve recognition accuracy, as well as a multicamera system to capture 3-D information of the antenna elements for more accurate segmentation.

Manuscript received 15 July 2023; revised 7 October 2023; accepted 13 October 2023. Date of publication 1 November 2023; date of current version 10 November 2023. This work was supported in part by the Natural Science Foundation of Sichuan Province under Project 2022NSFSC0567 and in part by the Fundamental Research Funds for the Central Universities under Project 2682023CX077. The Associate Editor coordinating the review process was Dr. Valentina Bianchi. (Corresponding author: Song Qiu.)

Fig. 1. Two general antenna calibration methods: antenna tower for outfield test and microwave anechoic chamber made of absorbing materials for infield test.

Fig. 3. Basic element helical antenna. (a) Front view. (b) Left view. (c) Top view. (d) Definition of its angle value (zero from the left) when it rotates around the axis.

Fig. 5. Top view of the whole image acquisition system nested on the HPAA.

Fig. 6. Recognition process of the proposed line-scan MV strategy.

Fig. 9. Different operators have different effects on center and end edge detection. (a) Original image. (b) Canny. (c) PiDiNet trained on a public dataset. (d) PiDiNet trained on our dataset.

Fig. 11. Schematic of the imaging process of a line-scan camera capturing an element helical antenna. (a) Triangular relationship between the projection of the end and its actual position. (b) FOVE exists only in the direction perpendicular to the direction of motion.

Fig. 12. Move strategy of the acquisition system and image preprocessing. (a) Camera center moves in columns along the antenna centers. (b) Long photographs are taken with the same number of columns. (c) Each image is cropped to an appropriate size for preprocessing.

Fig. 14. Schematic of the experimental platform; the experiment is carried out on a 5 × 8 small test HPAA and repeated to collect data.


Fig. 16. Boxplot of the entire experimental dataset. The data of 20 repeated recognitions are put into the same box, and the same color represents the same rotating state. The state difference "State1-State2" is given as a representative, shown with the black arrow.

TABLE I
DATASET COLLECTED UNDER DIFFERENT LIGHT CONDITIONS

TABLE II
RESULTS OF DIFFERENT SCALE MODELS ON THE HELICAL ANTENNA DATASET

TABLE III
PLATFORMS USED IN EXPERIMENTS AND THE HYPERPARAMETERS

TABLE VI
UNCERTAINTY CONTRIBUTORS BASED ON EXPERIMENTS AND ANALYSIS

Fig. 17. Statistical analysis of the experimental recognition difference for the 180° rotation with normal distribution fit.