
Sensing of aircraft position through IoT camera system installed with a fisheye lens

Maeyama Takashi 1, Asakura Takumi
Tokyo University of Science
2641 Yamazaki, Noda, Chiba 278-8510, Japan

Mori Junichi, Morinaga Makoto
Kanagawa University
3-27-1 Rokkakubashi, Kanagawa-ku, Yokohama, Kanagawa 221-8686, Japan

Nishino Kentaro, Yokoshima Shigenori
Kanagawa Environment Research Center
1-3-39 Shinomiya, Hiratsuka, Kanagawa 254-0014, Japan

Yamamoto Ippei
Defense Structure Improvement Foundation
15-9 Honshiotyo, Yotsuya, Shinjuku-ku, Tokyo 160-0003, Japan

1 7521550@ed.tus.ac.jp

ABSTRACT

Airport radars using GPS or radio waves are often used to sense aircraft flight positions. These techniques can cover a wide area, but measurements tend to be interrupted by radar blind spots caused by the low flight altitudes in the vicinity of an airfield. For managing or investigating aircraft operations, three-dimensional measurement of such local areas from the ground is effective. We have therefore developed an IoT camera system installed with a fisheye lens and measured the flight paths of aircraft passing overhead at multiple points near the end of a runway. The results show that our camera system can detect continuous flight paths even when the flight altitude is low.

1. INTRODUCTION

The most common method of estimating an aircraft's flight path is detection by airport radars using GPS or radio waves, which are installed at most airfields. Although these technologies perform well and cover a wide area, the low flight altitudes in the vicinity of an airfield tend to create radar blind spots, and measurements tend to be interrupted. For air traffic management and surveys, three-dimensional measurement in such localized areas is necessary. For this purpose, we have been developing aircraft flight-path estimation technology since 2018 using an IoT device equipped with a fisheye lens [1].

There are many previous studies on detecting flying aircraft from captured images. For example, Davies et al. studied the effectiveness of the Kalman filter for detecting small aircraft in low-contrast images [2]. Rozantsev et al. used motion detection to detect flying aircraft from camera images [3], and Doyle et al. combined motion detection and pan/tilt techniques to develop a system capable of real-time drone tracking [4].



In these studies, the detection of aircraft from images captured by a single-focus camera was considered using proprietary algorithms. However, there are as yet no studies that aim to detect aircraft in all directions using a fisheye lens.

Apart from the field of aircraft technology, there have also been many studies in the meteorological field that use fisheye lenses to capture images of the sky overhead. Those studies started around 2018 and have attempted to estimate cloud cover, cloud shape, sunshine, and so on from the captured sky images [5-7]. We also started to develop our system around 2018, by which time the infrastructure for IoT devices was in place: high-performance single-board computers, high-precision sensors, low-cost large-capacity recording media, motion detection, image processing, and simplified machine learning methods. This paper is organized as follows. The configuration of the device is first described. Then, the algorithm for detecting the aircraft flight displacement is presented. Finally, a case study of flight path estimation is discussed.

2. MOTION DETECTION METHOD


To detect the flight position of an aircraft, we have developed an IoT sensing device that can detect the positions of moving objects. The methods are described in detail below.

2.1. IoT Sensing Device

A configuration diagram of the IoT device is shown in Figure 1. A single-board computer (Raspberry Pi 3 Model B) is used for video recording and encoding. A network module (4GPi, MechaTracks Co., Ltd.) is used for managing the time measurement, and a power management module (SleePi, MechaTracks Co., Ltd.) is used for controlling the supplied voltage. A Raspberry Pi camera module (VR220, Entaniya Co., Ltd.) is used as the camera, and a fisheye lens (RP-L220, Entaniya Co., Ltd.) that can capture images with an angle of view of up to 220° is mounted on it. These modules are protected by a simple waterproof case in case of sudden rainfall. The device's tilt and reference azimuth are adjusted in the field using an electronic compass and a level.

Figure 1: Adopted IoT sensing device

2.2. Motion Detection

The image captured by the IoT device and each analysis step of the motion detection are shown in Figures 2 and 3, respectively. In the proposed method, the elevation angle is estimated from the distance between the central coordinate of the fisheye-lens circle and the coordinate of the object's center of gravity, and the azimuth angle is estimated from the angle formed by these two coordinates and the reference azimuth. The former coordinate is detected by a Hough transform of the capturing area of the fisheye lens, outlined in red in Figure 2. The latter coordinate is calculated with a simple method. Specifically, the difference between the input image and a moving-average image is calculated, and the difference image is converted to grayscale and then binarized. The white area of the binarized image in Figure 3(b) is composed of many scattered dots, so it cannot yet be identified as a single moving object. Therefore, the white pixels are expanded into the surrounding black area by dilation processing, as shown in Figure 3(c), and the discontinuous white dots are merged. The object contour is then detected from this image, and the object position in the image is obtained by calculating the coordinates of the contour's center of gravity.
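
A minimal sketch of this frame-differencing pipeline is shown below, assuming OpenCV is used; the threshold value, dilation kernel size, and moving-average weight are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def detect_moving_objects(frame, moving_avg, alpha=0.05, thresh=25):
    """Return centroids (cx, cy) of moving objects in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Update the moving-average background and take the absolute difference.
    cv2.accumulateWeighted(gray, moving_avg, alpha)
    diff = cv2.absdiff(gray, moving_avg)

    # Binarize the difference image.
    _, binary = cv2.threshold(diff.astype(np.uint8), thresh, 255, cv2.THRESH_BINARY)

    # Dilate so that scattered white dots merge into connected blobs.
    kernel = np.ones((5, 5), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=2)

    # Extract contours and compute each blob's center of gravity.
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```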


Calculating the elevation angle from the captured image requires a conversion ratio between the number of pixels and the angle. This ratio was obtained by capturing an image of markers placed in a room at known elevation angles, as shown in Figure 4, and measuring the number of pixels between the markers in the image.
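
The sketch below illustrates how such a pixel-to-angle ratio could be applied to convert an object's image coordinates into elevation and azimuth angles. It assumes an approximately equidistant fisheye projection; deg_per_pixel and north_offset_deg are hypothetical calibration values introduced for illustration.

```python
import math

def pixel_to_angles(cx, cy, center_x, center_y, deg_per_pixel, north_offset_deg=0.0):
    """Convert an object centroid (cx, cy) into (elevation, azimuth) in degrees."""
    dx, dy = cx - center_x, cy - center_y
    radius = math.hypot(dx, dy)

    # Elevation: 90 deg at the image center (zenith), decreasing outward.
    elevation = 90.0 - radius * deg_per_pixel

    # Azimuth: clockwise angle from the reference (north) direction at the image top.
    azimuth = (math.degrees(math.atan2(dx, -dy)) + north_offset_deg) % 360.0
    return elevation, azimuth
```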

Figure 2: Captured image using the IoT device

Figure 3: Detected aircraft in the air in each analysis step

2.3. Angle Detection Error

Through the distance-to-angle conversion test, we also checked the error of the measured angle of the lens. The measurement setup is shown in Figure 4 and the results are shown in Figure 5. The vertical axis is the difference between the camera elevation angle and the marker reference elevation angle, and the horizontal axis is the reference elevation angle. In this test, the camera azimuth was rotated in 60° steps. The gray lines show the error at each azimuth angle and the black line is the arithmetic mean.

The lens used has a nearly constant error of about 1° at elevation angles above 40°, but at lower elevation angles the error is highly non-uniform, especially around 20°, due to lens distortion. Therefore, the elevation angle measurements are offset by adding the results of this test as a correction.
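
One way such an offset correction could be applied is to interpolate the averaged error curve from the marker test and subtract it from each measured elevation angle, as in the sketch below. The error table shown is purely illustrative, not the measured values.

```python
import numpy as np

# Hypothetical mean error (deg) at reference elevation angles (deg),
# standing in for the black averaged curve in Figure 5.
ref_elevation = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
mean_error    = np.array([2.5, 3.0, 1.8, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])

def correct_elevation(measured_deg):
    """Compensate a measured elevation angle by the interpolated mean error."""
    return measured_deg - np.interp(measured_deg, ref_elevation, mean_error)
```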


Figure 4: Measurement scheme of elevation angles

Figure 5: Result of angle measurement

3. DETECTION OF AIRCRAFT POSITIONS

In the proposed motion detection method, all moving objects captured by the camera are detected. Therefore, the aircraft positions must be extracted from all the moving-object data to calculate the aircraft flight path. For this extraction, we adopt image classification with machine learning.

3.1. Training Data for Machine Learning

When using a machine learning technique, it is necessary to collect a labeled training data set. To acquire the data, we used 3D aircraft models and generated the images virtually and mechanically using computer graphics. We used 3D models of the eight aircraft (C-12, C-130, P-1, P-3, P-8, SH-60, TC-90, and UC-35) stationed at Naval Air Facility Atsugi, the subject of the case study. As the formats of the 3D models were not unified, we modified them in advance, for example by deleting textures and fixing the center position of the rendered image to the aircraft's center of gravity, to avoid errors in the machine learning process. The aircraft images with unified formatting are shown in Figure 6. To diversify the training data, the aircraft color of each 3D model is varied over 7 levels of grayscale and the background color over 10 levels, mainly blue. The rendered 3D models with these combinations of contrasts are output while the capture angles are rotated in 30° steps about the X, Y, and Z axes. This yields 120,960 images for each model; an equal number of randomly selected real images other than aircraft are added so that non-aircraft objects such as trees and cars can be classified as noise, resulting in a data set of approximately 1.09 million images.
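
As a quick check of the per-model image count, the rendering conditions described above can be enumerated as the product of 7 aircraft grayscale levels, 10 background levels, and 30° rotation steps about three axes; the rendering itself is not shown here.

```python
from itertools import product

aircraft_levels = range(7)        # grayscale levels of the aircraft model
background_levels = range(10)     # background color levels, mainly blue
angles = range(0, 360, 30)        # 30-degree increments per axis

conditions = list(product(aircraft_levels, background_levels, angles, angles, angles))
print(len(conditions))            # 7 * 10 * 12**3 = 120960 images per 3D model
```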



Figure 6: Images of the three-dimensionally modeled aircraft

3.2. Deep Learning with CNN

For the deep learning algorithm, a CNN, which is effective for image classification, is used, with PyTorch as the framework. ResNet-18 is selected as the model structure, the hyperparameters are kept at their initial values, and the class probabilities of the output layer are computed with a softmax function. The trained model is validated by k-fold cross-validation with the number of partitions set to 10, and the classification performance is evaluated by accuracy, precision, recall, and F-measure. As a result, the accuracy of this learning model was 100%.
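
A minimal sketch of such a classifier setup is given below, using torchvision's ResNet-18 with a softmax over the outputs. The class count, optimizer settings, and the assumption of training from scratch are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9  # assumed: 8 aircraft types + 1 "non-aircraft" noise class

model = models.resnet18(weights=None)                 # train from scratch
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()                     # applies log-softmax internally
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def predict_proba(images):
    """Return class probabilities for a batch of input images."""
    model.eval()
    with torch.no_grad():
        logits = model(images)
        return torch.softmax(logits, dim=1)
```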

3.3. Noise Reduction method

The method of reducing noise in the motion detection results is shown in Figure 7. Most aircraft in the vicinity of an airfield fly on takeoff/landing paths and the airfield traffic pattern, so if the runway direction is known, the direction in which the aircraft passes can be narrowed down. For example, in Figure 2 the reference azimuth (upward in the image) is north and the runway lies in the same direction, so a helicopter flying in the area passes through the 90° azimuth. The CNN classification using the training set is applied to images of moving objects passing through this cross-section. In addition, histograms of all detected moving objects are output beforehand. Only the aircraft positions are then extracted from all the moving-object detections by comparing the aircraft histogram at the CNN decision point with the histograms within a set time before and after it. In this comparison, the correlation coefficient must be greater than 0.85 and the time interval before and after must be less than 2.0 seconds. An example of the result is shown in Figure 8, where the gray plot is before noise reduction and the red plot is after noise reduction. Although some error remains at low elevation angles, the aircraft positions can be extracted.
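
A hedged sketch of this histogram-based filtering step follows: the histogram of the CNN-confirmed aircraft detection is compared with neighboring detections, and a candidate is kept only if it lies within the 2.0 s window and correlates above 0.85. The data structures are assumed for illustration; the histograms are expected to come from cv2.calcHist.

```python
import cv2

CORR_THRESHOLD = 0.85   # minimum histogram correlation
TIME_WINDOW = 2.0       # maximum time difference in seconds

def is_same_aircraft(ref_hist, cand_hist, dt):
    """Keep a candidate detection if it is close to the reference in time and appearance."""
    corr = cv2.compareHist(ref_hist, cand_hist, cv2.HISTCMP_CORREL)
    return abs(dt) <= TIME_WINDOW and corr > CORR_THRESHOLD
```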

Figure 7: Noise reduction for motion detection

Figure 8: Noise reduction results


4. CALCULATION OF FLIGHT PATH

4.1. Method

A general image of the proposed method for calculating the flight path is shown in Figure 9. The three-dimensional coordinates of the flight are calculated from the aircraft displacement detection results at two or more measurement points whose device clocks are synchronized. Specifically, normal vectors are obtained from the azimuth and elevation angles at both measurement points. The equation of a plane is derived from the coordinates of one measurement point, the normal vector, and the aircraft's coordinates. Then, a linear equation is derived from the coordinates of the other point, the normal vector, and the aircraft's coordinates. Finally, the position of the aircraft is calculated by solving the simultaneous equations of the plane and the line. The flight path is obtained by repeating this calculation.
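
The geometric core of this step can be sketched as below: azimuth/elevation pairs are converted into unit sight vectors, and a line through one site is intersected with a plane through the other. How the plane normal is constructed from the detected angles follows the authors' method; here it is taken as a given input, so this is an illustrative sketch rather than the full algorithm.

```python
import numpy as np

def sight_vector(azimuth_deg, elevation_deg):
    """Unit direction vector (east, north, up) for a given azimuth and elevation."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.sin(az) * np.cos(el),
                     np.cos(az) * np.cos(el),
                     np.sin(el)])

def line_plane_intersection(plane_point, plane_normal, line_point, line_dir):
    """Solve p = line_point + t * line_dir subject to (p - plane_point) . n = 0."""
    denom = np.dot(plane_normal, line_dir)
    if abs(denom) < 1e-9:
        return None  # sight line is parallel to the plane: no unique intersection
    t = np.dot(plane_normal, plane_point - line_point) / denom
    return line_point + t * line_dir
```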


Figure 9: The method of flight path calculation

4.2. Results and discussion

To confirm the validity of the algorithm, an experimental case study was conducted based on results obtained at the locations around Naval Air Facility Atsugi shown in Figure 10. The blue line indicates the airfield traffic pattern, and the width of the blue band in the figure indicates the estimated range of flight paths allowing for aircraft turning performance. Therefore, in order to increase the width over which aircraft can be captured by the cameras, measurement points were set at three locations, Sites A, B, and C, in the direction transverse to the path.

Figure 10: Location of measurement points of A, B, and C, and the considered flight path

The estimated flight paths are shown in Figures 11(a), 11(b), and 11(c). These results were calculated using the method described in Section 4.1. Site A-C denotes the flight path calculated from the aircraft detections at the two measurement points, Sites A and C; the same applies to Site A-B and Site B-C. Firstly, comparing Site A-B and Site A-C in Figure 11(a), the path of Site A-B is longer than that of Site A-C, because the distance between Sites A and C is large and the overlapping area shown in Figure 9 is therefore narrow. However, Figures 11(b) and 11(c) show that the flight altitudes of Site A-B and Site A-C are not constant; it is thought that the adjustment of the camera tilt at installation was insufficient. For Site B-C, the variance of the results is higher than for the other combinations. These two measurement points are not located across the flight path, which confirms that the arrangement of the measurement points affects the flight path calculation. A method for finding effective measurement point locations should be investigated in future work.


Figure 11: Flight paths calculated by this technology: (a), (b), and (c)

5. CONCLUSIONS

In this study, we developed an IoT camera system to estimate the flight path of an aircraft in the vicinity of an airfield. Based on the experimental case study described above, the validity of the aircraft displacement detection method in the vicinity of the airfield was confirmed.

6. REFERENCES

1. J. Mori, M. Morinaga, I. Yamamoto, T. Yokota, K. Makino, Y. Hiraguri, "Development of aircraft tracking camera system for sound power level measurement of aircraft noise", Proc. of Inter-noise, (2019).

2. D. Davies, P. Palmer, M. Mirmehdi, "Detection and Tracking of Very Small Low Contrast Objects", BMVC, pp. 2–10, (1998).

3. A. Rozantsev, V. Lepetit, P. Fua, "Flying Objects Detection from a Single Moving Camera", 2015 IEEE CVPR, pp. 4128–4136, (2015).

4. D. Doyle, L. Jennings, T. Black, "Optical flow background estimation for real-time pan/tilt camera object tracking", Measurement, Vol. 48, pp. 195–207, (2014).

5. L. Chapman, J. E. Thornes, "Real-Time Sky-View Factor Calculation and Approximation", J. Atmos. Ocean. Technol., Vol. 21, pp. 730–741, (2004).

6. G. Gil, M. Ramirez, "Fish-eye camera and image processing for commanding a solar tracker", Heliyon, Vol. 5, e01398, (2019).

7. S. Jeanne, F. Colas, B. Zanda, M. Birlan, J. Vaubaillon, S. Bouley, P. Vernazza, L. Jorda, J. Gattacceca, J. L. Rault, A. Carbognani, D. Gardiol, H. Lamy, D. Baratoux, C. Blanpain, A. Malgoyre, J. Lecubin, C. Marmo, "Calibration of fish-eye lens and error estimation on fireball trajectories: application to the FRIPON network", Astron. Astrophys., Vol. 627, A78, (2019).
