Runchong Dong, Jing Ma, Xingpei Chen, Wang Jianhua
In actual operation, some power system bad-data identification methods suffer from low accuracy. To remedy this, a deep learning-based bad-data identification method is designed. Data are collected from power system users, the phase deviation caused by non-integer-period sampling is reduced through a high sampling rate, and the measurement signal period is obtained. The operational state of the distribution network is then evaluated with deep learning, the state vector is calculated, the maximum standardized residual is found, and the location of the bad data is obtained; on this basis the bad-data identification method is designed. Experimental results: the mean accuracy of the proposed method is 78.26%, indicating that the designed method performs better once deep learning is fully integrated.
A deep learning-based approach for identifying bad data in power systems. DOI: 10.1117/12.2682551. International Conference on Electronic Technology and Information Science, 2023-06-20.
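The detection step described in the abstract (estimate the state, compute residuals, flag the measurement with the largest standardized residual) can be sketched with the classical largest-normalized-residual test. This is a generic weighted-least-squares illustration, not the paper's deep-learning state evaluation; the toy measurement matrix and the injected error are assumptions for demonstration only.

```python
import numpy as np

def largest_normalized_residual(H, z, R):
    """Locate a suspect measurement via the largest normalized residual.

    H : (m, n) measurement matrix, z : (m,) measurements,
    R : (m, m) measurement-error covariance.
    """
    W = np.linalg.inv(R)
    G = H.T @ W @ H                              # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)      # WLS state estimate
    r = z - H @ x_hat                            # measurement residuals
    Omega = R - H @ np.linalg.solve(G, H.T)      # residual covariance
    r_norm = r / np.sqrt(np.diag(Omega))         # normalized residuals
    return r_norm, int(np.argmax(np.abs(r_norm)))

# Toy system: 6 measurements of a 2-dimensional state, one gross error.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [1.0, -1.0], [2.0, 1.0], [1.0, 2.0]])
x_true = np.array([1.0, 2.0])
z = H @ x_true
z[3] += 5.0                                      # inject bad datum at index 3
r_norm, suspect = largest_normalized_residual(H, z, np.eye(6) * 0.01)
```

With a single non-critical bad measurement, the largest normalized residual is attained at that measurement, which is why the test can be used to locate bad data.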
In recent years, the global prevalence of chronic kidney disease (CKD) has risen year by year, and heavy metals widely distributed in the environment are nephrotoxic, potentially damaging the kidney and harming human health. This study therefore used laboratory heavy-metal data from the National Health and Nutrition Examination Survey (NHANES) to select the main heavy metals affecting the kidney, using a heavy-metal selection method that fuses SHAP values with the XGBoost algorithm. We then combined the odds ratios (OR) of the heavy metals with quartiles across different population risk subgroups to validate the feature selection results. We found that the selected blood lead and urinary cadmium had a strong effect on CKD, and the results were statistically significant. The method based on SHAP and XGBoost can thus uncover possible in vivo causal factors.
Main heavy metals affecting chronic kidney disease: a study based on feature selection algorithm. Yan-bin Wu, Shu Deng. DOI: 10.1117/12.2682554. International Conference on Electronic Technology and Information Science, 2023-06-20.
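The selection idea (rank candidate exposures by how much a boosted-tree model relies on them) can be sketched as follows. As an assumption, scikit-learn's GradientBoostingClassifier with a hand-rolled permutation importance stands in for the authors' SHAP + XGBoost pipeline, and the data are synthetic, with only the first two "metal" features driving the outcome.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))              # 8 candidate "heavy metal" features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # outcome driven by features 0 and 1 only

model = GradientBoostingClassifier(random_state=0).fit(X, y)
base = model.score(X, y)

def importance(j):
    """Accuracy drop when feature j is scrambled (permutation importance)."""
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return base - model.score(Xp, y)

imp = np.array([importance(j) for j in range(8)])
top2 = set(int(i) for i in np.argsort(imp)[-2:])   # two most influential features
```

Scrambling an informative feature destroys the model's accuracy while scrambling a noise feature barely matters, so the ranking recovers the truly influential exposures.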
To address the low accuracy of traditional low-resolution radar target classification and recognition, this paper proposes a low-resolution radar target classification algorithm based on a one-dimensional Densely Connected Convolutional Network (DenseNet). The algorithm first reduces the Densely Connected Convolutional Network to one dimension, then takes the raw 1D radar target signal as the training input. A segmented loss function, designed around the characteristics of the different signal classes, lets the network use different loss functions at different training stages; the loss is back-propagated to optimize the weights and improve the network's recognition performance. Experimental results show that the proposed method achieves a higher recognition rate than traditional radar target classification methods and plain one-dimensional convolutional neural networks (CNN) for low-resolution radar target classification, especially under low signal-to-noise ratio conditions, which fully demonstrates its effectiveness.
Low-resolution radar target classification algorithm based on one-dimensional densely connected network. Meibin Qi, Kan Wang. DOI: 10.1117/12.2682379. International Conference on Electronic Technology and Information Science, 2023-06-20.
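The idea of "different loss functions in different training stages" can be sketched independently of any particular network. In this hedged example, plain cross-entropy is used early and a focal-style loss (which down-weights easy, well-classified samples) later; the switch epoch and gamma are illustrative assumptions, not the paper's values.

```python
import numpy as np

def cross_entropy(p_true):
    """Negative log-likelihood of the true class probability."""
    return -np.log(p_true)

def focal_loss(p_true, gamma=2.0):
    """Cross-entropy scaled by (1 - p)^gamma, suppressing easy samples."""
    return -((1.0 - p_true) ** gamma) * np.log(p_true)

def staged_loss(p_true, epoch, switch_epoch=20):
    """Stage 1: plain cross-entropy; stage 2: focal loss."""
    return cross_entropy(p_true) if epoch < switch_epoch else focal_loss(p_true)
```

For an easy sample (p = 0.9) the focal loss is a far smaller fraction of the cross-entropy than for a hard sample (p = 0.6), which is how the later stage concentrates gradient on hard examples.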
In this paper a 1-to-4 time-delay power divider is presented. The phase and amplitude relations between the ports were simulated. The time delay between ports was 0.016 ns in the x-direction and 0.16 ns in the y-direction. The maximum amplitude difference between ports was 1 dB and the maximum phase error was ±6°. S11 was below -20 dB. The proposed time-delay power divider can be applied to a VICTS antenna to enhance its instantaneous bandwidth.
On time-delay power divider. Zheng Liu, Jian Zhang, Xuetang Lei. DOI: 10.1117/12.2682534. International Conference on Electronic Technology and Information Science, 2023-06-20.
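The link between a fixed time delay and phase is worth making explicit: a true-time-delay feed produces a phase shift that grows linearly with frequency, which is what preserves the beam direction across a wide instantaneous bandwidth. A minimal sketch, using the paper's 0.016 ns inter-port delay with an illustrative (assumed) operating frequency:

```python
def phase_deg(freq_hz, delay_s):
    """Phase shift in degrees produced by a fixed time delay at a given frequency."""
    return 360.0 * freq_hz * delay_s

# 0.016 ns delay: phase scales with frequency (frequencies are assumptions).
phi_10ghz = phase_deg(10e9, 0.016e-9)   # phase at 10 GHz
phi_12ghz = phase_deg(12e9, 0.016e-9)   # larger phase at 12 GHz, same delay
```

A fixed phase shifter would apply the same phase at both frequencies, causing beam squint; the delay line instead keeps the delay constant and lets the phase scale.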
Eye-tracking technology can reveal how people focus their attention and react emotionally to their surroundings. In this study, a wearable eye tracker was used to conduct eye-movement experiments in a realistic environment. A finite impulse response (FIR) filter was chosen for signal processing, and an eye-movement data set was created. First, 26 features were chosen by a machine learning algorithm for emotion recognition, and the average recognition rate with GBDT was 71.1%. After Spearman correlation analysis between the features and emotional state, 22 significantly correlated features were retained; with these, GBDT reached a recognition rate of 74.61% and XGBoost 75.63%. The experimental results demonstrate the validity of our data set and provide data support for subsequent research.
Research on emotion recognition of eye movement in realistic environment. Changdi Hong, Jinlan Wang, Yuanxu Wang, T. Ning, Jinmiao Song, Xiaodong Duan. DOI: 10.1117/12.2682524. International Conference on Electronic Technology and Information Science, 2023-06-20.
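The Spearman screening step (keep only features whose rank correlation with the emotion label is noteworthy) can be sketched as below. Spearman's rho is computed as Pearson correlation of average ranks; the synthetic "eye-movement features" and the 0.4 threshold are illustrative assumptions.

```python
import numpy as np

def avg_ranks(v):
    """Ranks of v, with tied values assigned their average rank."""
    v = np.asarray(v)
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v), float)
    ranks[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        mask = v == val
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(a, b):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(avg_ranks(a), avg_ranks(b))[0, 1])

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)                  # binary emotion state (assumed)
X = rng.normal(size=(n, 5))                     # 5 candidate eye-movement features
X[:, 0] = labels + 0.1 * rng.normal(size=n)     # only feature 0 tracks the label
selected = [j for j in range(5) if abs(spearman(X[:, j], labels)) > 0.4]
```

Features surviving the screen would then be fed to the GBDT/XGBoost classifier, mirroring the 26-to-22 feature reduction described above.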
Hongyu Wang, Xiaodan Zhang, Chen Quan, Tong Zhao, Huali Du
Because Qinghai lies in the high-altitude Qinghai-Tibet Plateau region, its geomorphological types are complex and diverse and its ground precipitation observation stations are sparsely distributed, so improving the accuracy of precipitation data is critical for studying regional ecological change over time. In this paper we construct a multi-source precipitation data fusion model based on neural networks, consisting of a back-propagation neural network (BPNN) and a long short-term memory network (LSTM). Global precipitation measurement (GPM), fifth-generation ECMWF atmospheric reanalysis (ERA5), digital elevation model (DEM), and normalized difference vegetation index (NDVI) data are selected as features, with ground observation station data as labels for model training. The results show that, compared with the original GPM, the fused data generated by the BP-LSTM model reduce the root mean square error to 2.48 mm and the overall relative bias to 0.25%, outperforming ERA5 in data accuracy. The precipitation-event capture capability is also improved, approaching that of ERA5, which captures precipitation events strongly; the probability of detection, false alarm rate, and missed event rate are 0.95, 0.53, and 0.04 respectively. Finally, regional precipitation data are generated by the fusion model at a spatial resolution of 0.01° and a temporal resolution of 1 h. The proposed model incorporates topographic factors and seasonal characteristics to capture the temporal and spatial correlation of precipitation data in Qinghai Province, improve the accuracy of precipitation data, and provide reliable data support for studying regional hydro-ecological spatio-temporal variation patterns.
A study of regional precipitation data fusion model based on BP-LSTM in Qinghai province. DOI: 10.1117/12.2682392. International Conference on Electronic Technology and Information Science, 2023-06-20.
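The event-capture scores quoted above (probability of detection, false alarm rate, missed event rate) are standard contingency-table verification metrics and can be computed as follows; the tiny example masks are assumptions for illustration.

```python
import numpy as np

def event_scores(obs, pred):
    """POD, FAR and miss rate from binary precipitation-event masks."""
    obs = np.asarray(obs, bool)
    pred = np.asarray(pred, bool)
    hits = np.sum(obs & pred)            # event observed and predicted
    misses = np.sum(obs & ~pred)         # event observed but not predicted
    false_alarms = np.sum(~obs & pred)   # event predicted but not observed
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm rate
    miss_rate = misses / (hits + misses)         # missed event rate
    return pod, far, miss_rate

pod, far, miss_rate = event_scores([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

With these common definitions the miss rate equals 1 - POD; the abstract's 0.95 and 0.04 are close to, but not exactly, complementary, so the authors' precise definitions may differ slightly.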
Directional modulation (DM) based on a phased array (PA) can realize angle-dependent secure transmission. In this paper, a frequency agile array (FAA) based DM technique is proposed to achieve range-and-angle-dependent secure transmission. Unlike the conventional frequency diverse array (FDA), whose frequency offsets across the array are fixed, the FAA changes the frequency offsets at the symbol rate, producing a distortionless constellation at the target location and randomly distorted constellations at other locations. Two frequency offset selection schemes are presented. The first randomly selects each element's frequency offset from a given set, so the received signal is randomly distorted in both amplitude and phase everywhere except the target location. The second selects optimal frequency offsets with lower sidelobes using the ant colony optimization (ACO) algorithm; the sidelobe level is then relaxed appropriately to find multiple near-optimal solutions around the optimal offsets. Simulation results show that the proposed method yields a higher bit error rate (BER) at non-target locations and a narrower information beamwidth near the target location, providing better secure transmission performance than the conventional FDA.
Frequency agile array based directional modulation. Yiwen Zhang, Yougen Xu. DOI: 10.1117/12.2682405. International Conference on Electronic Technology and Information Science, 2023-06-20.
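The core FAA mechanism (per-symbol random frequency offsets, pre-compensated so they align only at the target range) can be demonstrated with a simplified narrowband model. The element count, symbol count, offset range, and the two ranges are all assumptions; the point is only that the composite gain is distortionless at the target range and fluctuates elsewhere.

```python
import numpy as np

c = 3.0e8                                 # speed of light, m/s
rng = np.random.default_rng(1)
N, K = 8, 64                              # array elements, transmitted symbols
r_target, r_other = 10_000.0, 10_300.0    # ranges in metres (illustrative)

def range_phase(df, r):
    """Range-dependent phase rotation produced by frequency offsets df."""
    return np.exp(-1j * 2.0 * np.pi * df * r / c)

gains_target, gains_other = [], []
for _ in range(K):
    df = rng.uniform(0.0, 50e6, N)        # fresh random offsets each symbol
    # Pre-compensate each element so the phases align at the target range.
    w = np.exp(1j * 2.0 * np.pi * df * r_target / c) / N
    gains_target.append(np.sum(w * range_phase(df, r_target)))
    gains_other.append(np.sum(w * range_phase(df, r_other)))
```

At the target range the compensation cancels every offset-induced phase, so the constellation is undistorted symbol after symbol; at another range the residual phases are effectively random per symbol, scrambling amplitude and phase as described above.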
The soil bioremediation process at coking sites is complex, the site environment is harsh, and project periods are long. Compared with water and air pollution monitoring, the informatization level of soil bioremediation projects is low, and their digitalization and intelligence urgently need improvement. Through the design of an online monitoring and electronic inspection system for the bioremediation of coking-contaminated soil and the development of intelligent early-warning software, this study investigates information technologies and data models specific to coking-contamination remediation. The paper focuses on three core elements of this field: multidimensional data collection technologies such as the Internet of Things and image recognition; big data processing technologies built on communication modules and cloud-platform databases; and the construction of a neural network computational model of the soil bioremediation process. The information system has been trialled in a soil bioremediation pilot, realizing information management functions such as sensor operation-status monitoring, inspection management, equipment self-status management, online monitoring and alarming of soil bioremediation parameters, and trend prediction of future soil parameters, forming a new generation of intelligent supervision system for soil bioremediation sites.
Design and application of an intelligent monitoring and early warning system for bioremediation of coking contaminated sites. Xiaowen Wang, Wensi Wang, NiYun Yang, XiaoWei Wang, Fuyang Wang, Xiaoshu Wei, Yanping Ji, Wangxin Chen, Mengyi Zheng. DOI: 10.1117/12.2682342. International Conference on Electronic Technology and Information Science, 2023-06-20.
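The "online monitoring and alarming of soil bioremediation parameters" function reduces, at its simplest, to range checks on incoming sensor readings. A minimal sketch; the parameter names and safe ranges are hypothetical assumptions, not the project's configuration.

```python
# Illustrative safe operating ranges (assumed, not from the paper).
SAFE_RANGES = {
    "temperature_C": (10.0, 45.0),
    "moisture_pct": (40.0, 70.0),
    "dissolved_O2_mgL": (2.0, 12.0),
}

def check_reading(reading):
    """Return a list of (parameter, value, reason) alarms for one sensor reading."""
    alarms = []
    for name, value in reading.items():
        lo, hi = SAFE_RANGES[name]
        if value < lo:
            alarms.append((name, value, "below range"))
        elif value > hi:
            alarms.append((name, value, "above range"))
    return alarms
```

In a deployed system such rules would sit on the cloud platform, with the neural-network trend prediction raising alarms ahead of the threshold crossing rather than after it.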
In meta-learning-based small-sample HRRP recognition, HRRP data are one-dimensional and offer fewer extractable features than multidimensional data, so the one-dimensional data must be spliced into two-dimensional data to improve the recognition rate. This paper reconceptualizes the features of HRRP data from a two-dimensional perspective and proposes a low-noise-sensitivity feature extraction method based on wavelet decomposition, together with a method that splices low-frequency wavelet coefficients in descending order of scale, making it better suited to recognizing small-sample targets with noisy data. The noisy HRRP is decomposed by wavelet packet; the lowest-frequency wavelet coefficients, which have low noise sensitivity, are extracted using wavelet-packet sub-band energy and cosine similarity, spliced in descending order of scale, and combined with the original data to form two-dimensional data for training with neural networks. Experiments show that the proposed method has clear advantages in recognition accuracy, dependence on sample size, and feature extraction ability.
A new method of feature splicing based on wavelet transform for recognition of HRRP with noise. Junmeng Cui, Ning Fang, Yihua Qin, Xiucheng Shen. DOI: 10.1117/12.2682358. International Conference on Electronic Technology and Information Science, 2023-06-20.
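The sub-band-energy criterion used above rests on the energy-preserving property of orthonormal wavelet transforms. As a hedged stand-in for the paper's wavelet packet transform, one level of an orthonormal Haar decomposition illustrates the split into low- and high-frequency coefficients and the energy bookkeeping; the sample signal is an assumption.

```python
import numpy as np

def haar_step(x):
    """One level of an orthonormal Haar decomposition (stand-in for a
    wavelet packet transform; requires even length)."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-frequency coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-frequency coefficients
    return approx, detail

def subband_energy(coeffs):
    """Energy of a coefficient vector (sum of squares)."""
    return float(np.sum(np.asarray(coeffs) ** 2))

x = np.array([4.0, 2.0, -1.0, 3.0, 0.0, 5.0, 2.0, 2.0])   # toy range profile
a, d = haar_step(x)
```

Because the transform is orthonormal, the sub-band energies sum to the signal energy, so ranking sub-bands by energy is well defined; the selected low-frequency coefficients would then be spliced by scale and stacked with the raw profile to form the 2D input.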
To address the degraded tracking accuracy of the Tracking-Learning-Detection (TLD) algorithm when targets appear under varying illumination conditions and at changing scales, an improved TLD tracking algorithm is proposed. Speeded Up Robust Features (SURF) feature-point matching is adopted in the tracking module, and low-confidence feature-point pairs are removed through an added pair-evaluation step. Contrast Limited Adaptive Histogram Equalization (CLAHE) is introduced into the detection module, a random Circle feature classifier is proposed, and HOG feature matching replaces the normalized correlation matching in the nearest-neighbor classifier. In addition, the detection range is adjusted adaptively, which reduces computational complexity and effectively improves the algorithm's adaptability to scale variation. Experimental results show that the proposed algorithm effectively overcomes the influence of environmental illumination, is highly robust to scale changes, and achieves high tracking accuracy. Compared with the classical TLD algorithm, the improved algorithm performs better.
An improved target tracking learning detection algorithm. Yang Gao, Changbo Xu, Shaozhong Cao. DOI: 10.1117/12.2682442. International Conference on Electronic Technology and Information Science, 2023-06-20.
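The pair-evaluation step (discard feature-point matches with low confidence) can be sketched with a Lowe-style ratio test: a match is kept only when its best descriptor distance is clearly better than its second-best. This is an illustrative criterion in the same spirit as the paper's evaluation, not the authors' exact rule; the distance matrix and ratio are assumptions.

```python
import numpy as np

def filter_matches(dists, ratio=0.7):
    """Keep confident matches from a (n_query, n_train) descriptor-distance
    matrix: query i is kept only if its best distance is below `ratio`
    times its second-best, returning (query_idx, train_idx) pairs."""
    keep = []
    for i, row in enumerate(np.asarray(dists, float)):
        order = np.argsort(row)
        best, second = row[order[0]], row[order[1]]
        if best < ratio * second:
            keep.append((i, int(order[0])))
    return keep
```

Ambiguous matches, whose best and second-best distances are similar, are exactly the low-confidence pairs that corrupt the tracking module, so dropping them stabilizes SURF-based tracking.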