Zilu Wen, Jinyu Liu, Chenxi Liu. "Football Momentum Analysis based on Logistic Regression." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/jbsh1q88

In tennis, momentum is pivotal and can be quantified using metrics such as Consecutive Win Rate (CWR), Unforced Error Rate (UER), Break Point Save Rate (BPSR), and Fatigue Factor (FF). Each metric provides insight into a player's performance and state during a match: CWR is a direct momentum indicator reflecting a player's dominance of the game, UER highlights lapses in concentration or physical condition, BPSR evaluates clutch performance in critical situations, and FF gauges physical exertion. Using logistic regression with these metrics as explanatory variables, the probability that a player wins at any scoring point can be predicted. The coefficients obtained from the MATLAB analysis (e.g., p1_cwr at 22.73 and p2_ff at -3.26) indicate whether each factor correlates positively or negatively with a player's chance of winning. For the "2023-wimbledon-1301" match, the model's predictions showed a symmetrical distribution of win probabilities between the players, suggesting balanced momentum swings throughout the match. Initial volatility in Player 1's success rate indicated a strong start that diminished over time, possibly due to fatigue or the opponent's improving performance. Despite the fluctuations and a period of deadlock, Player 1's consistent performance and superior win rate for most of the match secured the victory. This outcome underscores the importance of maintaining momentum and physical resilience in tennis.
{"title":"Football Momentum Analysis based on Logistic Regression","authors":"Zilu Wen, Jinyu Liu, Chenxi Liu","doi":"10.54097/jbsh1q88","DOIUrl":"https://doi.org/10.54097/jbsh1q88","url":null,"abstract":" In tennis, momentum is pivotal and can be quantified using metrics like Consecutive Win Rate (CWR), Unforced Error Rate (UER), Break Point Save Rate (BPSR), and Fatigue Factor (FF). Each metric provides insight into a player's performance and state during a match. CWR is a clear momentum indicator, reflecting a player's game dominance, while UER highlights potential lapses in concentration or physical condition. BPSR evaluates a player's clutch performance in critical situations, and FF gauges physical exertion. Utilizing logistic regression, we can predict a player's probability to win at any scoring point, incorporating these metrics as variables. The coefficients obtained from MATLAB analysis (e.g., p1_cwr at 22.73 and p2_ff at -3.26) reveal the positive or negative correlation of these factors with a player's winning chances. In the case of the \"2023-wimbledon-1301\" match, the logistic model's predictions showed a symmetrical distribution of win probabilities between players, suggesting a balance in momentum swings throughout the match. Initial volatility in Player 1's success rate indicated a strong start, which diminished over time, possibly due to fatigue or the opponent's improving performance. Despite the fluctuations and a period of deadlock, Player 1's consistent performance and superior win rate for most of the game secured the victory. This outcome underscores the importance of maintaining momentum and physical resilience in tennis.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"11 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140252494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yizhao Jia, Lihao Qin, Dan He, Na Li. "Research on Abnormal Behavior Detection Technology for Simmental Cattle." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/0cc8c798

This paper studies abnormal behavior detection for Simmental cattle, aiming to establish an efficient and reliable detection system that flags abnormal situations in time so that potential problems can be addressed promptly. A dedicated dataset of abnormal Simmental cattle behavior is constructed, and deep learning algorithms are used to accurately detect the presence, location, and key body parts of the cattle and to identify abnormal behaviors such as convulsions and falls. Applying this technology in animal husbandry enables contactless, automated, and efficient monitoring of cattle behavior, offering an advanced and comprehensive intelligent health-management solution and promoting the wider adoption of intelligent management in the industry. Real-time monitoring, early warning, and fine-grained management of cattle behavior can effectively reduce the risk of disease transmission and improve production safety on farms.
{"title":"Research on Abnormal Behavior Detection Technology for Simmental Cattle","authors":"Yizhao Jia, Lihao Qin, Dan He, Na Li","doi":"10.54097/0cc8c798","DOIUrl":"https://doi.org/10.54097/0cc8c798","url":null,"abstract":"This paper mainly studies the abnormal behavior detection technology of Simmental cattle, aiming to establish an efficient and reliable abnormal behavior detection system, so as to detect abnormal situations in time and take corresponding measures to deal with potential problems. At the same time, by establishing a dataset for the abnormal behavior of Simmental cattle, the study uses deep learning algorithms to accurately capture the existence, location and key body parts of Simmental cattle, and accurately identify abnormal behaviors such as convulsions and falls. The application of abnormal behavior detection technology to animal husbandry to achieve contactless, automated, and efficient monitoring of Simmental cattle behavior can provide advanced and comprehensive intelligent health management solutions for animal husbandry and promote the wide application of intelligent management in animal husbandry. Real-time monitoring, early warning and fine management of Simmental cattle behavior can effectively reduce the risk of disease transmission and improve the production safety of farms.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140253809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jie Yang, Zijun Li. "Construction of a Climate Early Warning System: Predicting Future Temperatures and Climate Security Using BiLSTM." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/zscep661

In light of the worsening global climate, predictive models for surface temperature and energy consumption are crucial for formulating effective climate action strategies. First, a Bi-directional Long Short-Term Memory (BiLSTM) network is established to predict maximum surface temperatures over the next century, with a Seasonal AutoRegressive Integrated Moving Average (SARIMA) model serving as a benchmark. To assess climate security risk levels, the k-means clustering algorithm is used to classify the growth rates of carbon dioxide emissions, yielding a three-tier climate security early warning index. A hybrid classifier based on Support Vector Machine (SVM) and Random Forest (RF) then takes energy consumption growth rates as inputs and the warning indices as outputs, forming a climate security early warning system. The BiLSTM model predicts energy consumption growth rates for the coming decade, and these rates are fed into the SVM-RF model to forecast future warning levels. The study demonstrates that the model can effectively predict maximum surface temperatures and provide a three-tier safety warning system for future climate risk management. The research aims to offer a novel tool for climate risk prevention and practical value to policymakers in the finance, energy, and environmental sectors.
{"title":"Construction of a Climate Early Warning System: Predicting Future Temperatures and Climate Security Using BiLSTM","authors":"Jie Yang, Zijun Li","doi":"10.54097/zscep661","DOIUrl":"https://doi.org/10.54097/zscep661","url":null,"abstract":"In light of the worsening global climate, providing predictive models for surface temperature and energy consumption is crucial for formulating effective climate action strategies. Initially, a Bi-directional Long Short-Term Memory (BiLSTM) network model is established to predict the maximum surface temperatures over the next century, with the Seasonal AutoRegressive Integrated Moving Average (SARIMA) model serving as a benchmark. To assess the risk levels of climate security, the k-means clustering algorithm is utilized to classify the growth rates of carbon dioxide emissions, enabling the construction of a three-tier climate security early warning index. Subsequently, a hybrid classification model based on Support Vector Machine (SVM) and Random Forest (RF) takes the energy consumption growth rates as inputs and the warning indices as outputs to construct a climate security early warning system. The BiLSTM model is employed to predict the energy consumption growth rates for the upcoming decade, and these rates are input into the SVM-RF model to forecast future warning levels. The study demonstrates that the model can effectively predict the maximum surface temperatures and provide a three-tier safety warning system for future climate risk management. The intent of this research is to offer a novel tool for global climate prevention and to deliver practical application value to policymakers in finance, energy, and environmental sectors.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"21 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140253883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xinchong Wang, Xinxiang Wang. "Optimizing the Sales Law and Replenishment Decision Analysis of Fresh Supermarket Products." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/semg8c08

In fresh food supermarkets, vegetable products have a relatively limited shelf life, and their quality gradually deteriorates over the sales period. Supermarkets therefore need to make daily replenishment decisions based on each product's historical sales data and demand. This paper draws on collected product information for vegetable categories, detailed vegetable sales flow data, wholesale prices, and recent wastage rates. Data analysis and visualization techniques are used to characterize the distribution of vegetable sales by category and by individual product. A functional relationship between the total sales volume of each vegetable category and its cost-plus pricing is then constructed, and regression forecasts are used to simulate future wholesale prices of the vegetable categories.
{"title":"Optimizing the Sales Law and Replenishment Decision Analysis of Fresh Supermarket Products","authors":"Xinchong Wang, Xinxiang Wang","doi":"10.54097/semg8c08","DOIUrl":"https://doi.org/10.54097/semg8c08","url":null,"abstract":"In the field of fresh food supermarkets, due to the relatively limited shelf-life of vegetable products, their quality will gradually deteriorate with the sales time. Therefore, supermarkets need to perform daily rationing replenishment operations based on the historical sales data and demand of the products. This paper is based on the collected product information of vegetable categories, detailed vegetable sales flow data, wholesale prices of vegetable products, and recent vegetable product wastage rates. Data analysis and visualization techniques are used to analyze the distribution pattern of vegetable sales in each category and single product. Next, a functional relationship between total sales volume and cost-plus pricing of vegetable categories was constructed. Regression forecasts were used to simulate the future wholesale prices of vegetable categories.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"29 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140252126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wencheng Wang, Quandi Wu, Pengcheng Zhang, Tianci Liu. "Design of a Detachable Surface Garbage Cleaning Robot Suitable for Small Water Areas." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/2ejtmt52

Surface garbage in small water bodies poses a severe threat to the safety of residents' water supply. Focusing on the characteristics of such garbage, a robot capable of collecting light floating debris and animal carcasses has been designed. This paper presents the structural design of a detachable surface garbage cleaning robot for small water areas, the calculation of its draft and travel resistance, and the selection and sizing of its propulsion device. The robot has been fabricated and can clean various types of waste, providing a reference for the structural design of surface garbage cleaning equipment.
{"title":"Design of a Detachable Surface Garbage Cleaning Robot Suitable for Small Water Areas","authors":"Wencheng Wang, Quandi Wu, Pengcheng Zhang, Tianci Liu","doi":"10.54097/2ejtmt52","DOIUrl":"https://doi.org/10.54097/2ejtmt52","url":null,"abstract":"Surface garbage in small water bodies poses a severe threat to the safety of residents’ water usage. Focusing on the characteristics of surface garbage in small water bodies, a robot capable of collecting light floating debris and animal carcasses have been designed. This paper accomplishes the design process of a detachable structure surface garbage cleaning robot suitable for collecting floating garbage in small water areas, the calculation process of the draft and travel resistance, and the selection and calculation process of the propulsion device. Furthermore, the robot has been fabricated, and it is capable of cleaning various types of waste, thereby providing a guiding document for the structural design of surface garbage cleaning equipment.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"12 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140253934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xue Tian, Xia Wang, Ying Li. "Analysis of Health Data for the Elderly based on Medical Website Mining." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/smq5vf11

The development of Internet medicine provides a convenient platform for doctor-patient communication and an important data source for monitoring the health of the elderly. In this study, a large number of consultation records obtained from a medical website were screened and cleaned, and a medical lexicon was built by training on the collected data, addressing the inaccurate segmentation of professional terminology produced by jieba's default dictionary. The trained lexicon was then used for topic mining of the consultation data, which found that the issues of greatest concern in elderly health are cerebro-cardiovascular, pulmonary, and stomach diseases, providing a basis for further medical advice and targeted services.
{"title":"Analysis of Health Data for the Elderly based on Medical Website Mining","authors":"Xue Tian, Xia Wang, Ying Li","doi":"10.54097/smq5vf11","DOIUrl":"https://doi.org/10.54097/smq5vf11","url":null,"abstract":"The development of Internet medicine provides a convenient platform for doctor-patient communication, and provides an important data source for paying attention to the health of the elderly. In this study, a large number of consultation records obtained from the medical website was screened and cleaned, and the medical thesaurus was generated by training of its own data to solve the problem of inaccurate professional terminology segmentation caused by using default jieba segmentation. At the same time, we use the trained medical thesaurus to conduct topic mining of the consultation data, and it is found that the most concerned problems in the field of elderly health are cerebro-cardiovascular, pulmonary and stomach diseases, so as to provide further medical advice and targeted services.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"10 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140253951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lei Lu, Ran Gao, Wei Pan, Wenming Tang. "A Point Cloud Contour Extraction Method based on Plane Segmentation." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/jfqs4b09

A method based on plane segmentation and dimensionality reduction is proposed to address the incomplete and slow extraction of contour features from object point clouds. The method consists of two main steps: plane segmentation and contour extraction. In plane segmentation, the Random Sample Consensus (RANSAC) algorithm is optimized using Principal Component Analysis (PCA); the segmented planar point cloud is then reduced in dimensionality, and the contour features are extracted using gradients. Experimental results show that the method can effectively segment point clouds and extract the contours of target surfaces, with strong application potential in industrial inspection and other fields.
{"title":"A Point Cloud Contour Extraction Method based on Plane Segmentation","authors":"Lei Lu, Ran Gao, Wei Pan, Wenming Tang","doi":"10.54097/jfqs4b09","DOIUrl":"https://doi.org/10.54097/jfqs4b09","url":null,"abstract":"A method based on plane segmentation and dimensionality reduction for extracting incomplete and slow contour features of object point clouds is proposed. The method consists of two main steps: plane segmentation and contour extraction. In plane segmentation, the random sample consensus (Random Sample Consensus, RANSAC) algorithm is optimized based on principal component analysis (Principal Component Analysis, PCA); the optimized planar point cloud is then subjected to dimensionality reduction, and the contour features are extracted using gradients. Experimental results show that the method can effectively segment point clouds and extract the contours of target surfaces, and has great potential for application in industrial inspection and other fields.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"10 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140254136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zongzhi Lou, Linlin Chen, Tian Guo, Zhizhong Wang, Yuxuan Qiu, Jinyang Liang. "Target Detection and Segmentation Technology for Zero-shot Learning." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/v7tbh549

Zero-shot learning (ZSL) in computer vision refers to enabling a model to recognize and understand categories never encountered during training. It is particularly critical for object detection and segmentation, because these tasks require the model to generalize well to unknown categories: detection must localize the object, while segmentation must further delineate its boundaries precisely.

In ZSL research, knowledge representation and transfer are the core issues. Researchers have used semantic attributes (such as color and shape) as a knowledge bridge connecting categories seen during training to categories unseen at test time, but this approach requires accurate attribute annotation, which is often impractical. Researchers have therefore begun to exploit non-visual information such as knowledge graphs and text descriptions to enrich models' recognition capabilities, which in turn introduces the challenge of integrating and aligning that information.

ZSL has made progress on object detection and segmentation, but a significant gap remains relative to traditional supervised learning, mainly because ZSL models generalize to new categories only to a limited degree. To close this gap, researchers have begun combining ZSL with other techniques, such as generative adversarial networks (GANs) and reinforcement learning, to strengthen detection and segmentation of new categories.

Future research needs to focus on several aspects: designing more effective knowledge representation and transfer mechanisms so that models can better exploit existing knowledge; developing new algorithms that improve ZSL performance in complex environments; and reducing the dependence on computing resources so that ZSL methods can run effectively in resource-limited settings. In summary, object detection and segmentation under zero-shot learning is a cutting-edge topic in computer vision; despite the challenges, continued research is expected to improve the generalization ability and intelligence of vision systems.
{"title":"Target Detection and Segmentation Technology for Zero-shot Learning","authors":"Zongzhi Lou, Linlin Chen, Tian Guo, Zhizhong Wang, Yuxuan Qiu, Jinyang Liang","doi":"10.54097/v7tbh549","DOIUrl":"https://doi.org/10.54097/v7tbh549","url":null,"abstract":"Zero-shot learning (ZSL) in the field of computer vision refers to enabling the model to recognize and understand categories that have not been encountered during the training phase. It is particularly critical for object detection and segmentation tasks, because these tasks require the model to have good generalization capabilities to unknown categories. Object detection requires the model to determine the location of the object, while segmentation further requires the precise demarcation of the object's boundaries. In ZSL research, knowledge representation and transfer are core issues. Researchers have tried to use semantic attributes as a knowledge bridge to connect categories seen during the training phase and categories not seen during the testing phase. These attributes may be color, shape, etc., but this method requires accurate attribute annotation, which is often not easy to achieve in practice. Therefore, researchers have begun to explore the use of non-visual information such as knowledge maps and text descriptions to enrich the recognition capabilities of models, but this also introduces the challenge of information integration and alignment. At present, ZSL has made certain progress in target detection and segmentation tasks, but there is still a significant gap compared with traditional supervised learning. This is mainly due to the limited ability of ZSL models to generalize to new categories. To this end, researchers have begun to explore combining ZSL with other technologies, such as generative adversarial networks (GANs) and reinforcement learning, to enhance the model's detection and segmentation capabilities for new categories. Future research needs to focus on several aspects. The first is how to design a more effective knowledge representation and transfer mechanism so that the model can better utilize existing knowledge. The second step is to develop new algorithms to improve the performance of ZSL in complex environments. In addition, research should focus on how to reduce the dependence on computing resources so that the ZSL method can run effectively in resource-limited environments. In summary, the research on target detection and segmentation technology of zero-shot learning is a cutting-edge topic in the field of computer vision. Despite the challenges, with the deepening of research, we expect these technologies to contribute to improving the generalization ability and intelligence level of computer vision systems.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"85 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140254737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yitao Liang, Jiabei Dai, Lei Lu. "Three Dimensional Reconstruction of Two-step Moving Objects based on Phase-shifting Profilometry." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/a7w6dn37

In recent years, 3D object reconstruction based on phase-shifting profilometry has received growing attention and wide application. Researchers at home and abroad have continuously pursued greater accuracy and speed in three-dimensional measurement, gradually moving toward dynamic measurement. Most dynamic measurements require projecting multiple fringe patterns to obtain sufficient phase information, and the more fringe patterns used, the greater the phase error caused by motion. This article proposes increasing the number of samples captured within the projection period of a single fringe pattern to achieve high-frame-rate dynamic 3D reconstruction. By incorporating the intensity of the ambient light, the phase information of the object is extracted by tracking the motion information obtained from the moving object. Simulation experiments demonstrate the feasibility of the method and show an improved frame rate for 3D reconstruction of moving objects.
{"title":"Three Dimensional Reconstruction of Two-step Moving Objects based on Phase-shifting Profilometry","authors":"Yitao Liang, Jiabei Dai, Lei Lu","doi":"10.54097/a7w6dn37","DOIUrl":"https://doi.org/10.54097/a7w6dn37","url":null,"abstract":"In recent years, 3D object reconstruction based on phase-shifting profilometry has gradually received attention and been widely applied. Domestic and foreign scholars have been continuously researching and exploring the accuracy and speed of three-dimensional measurement, and gradually developing towards dynamic measurement. Most dynamic measurements require projecting multiple stripe patterns to obtain sufficient object phase information, and the more stripes there are, the greater the phase error caused by motion. This article proposes the use of increasing the sampling fringe pattern during the projection period of a fringe pattern to achieve high frame rate dynamic 3D object reconstruction. By combining the intensity values of ambient light; Finally, the phase information of the object is extracted by tracking the motion information obtained from the moving object. This article demonstrates the feasibility of this method through simulation experiments and improves the frame rate of 3D reconstruction of moving objects.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"15 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140254402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haisheng Song, Nian Liu. "Research and Analysis of Dark Channel Priori Dehazing Algorithm based on Guided Filtering." Frontiers in Computing and Intelligent Systems, 2024. DOI: 10.54097/t7knrd65

The dark channel prior dehazing algorithm based on minimum filtering consumes significant computational and storage resources for transmittance optimization and produces artifacts such as halos in gray and white regions of the image. In contrast, the algorithm proposed in this paper offers a novel approach to dark channel image dehazing. Building on dark channel prior knowledge, it introduces an adaptive adjustment factor to enhance the realism of restored image details, and it refines the transmittance map with guided filtering instead of traditional image matting. The haze-free image is then reconstructed from the estimated atmospheric light and the refined transmittance map according to the atmospheric scattering model. After restoration, brightness and contrast are enhanced, and the image is further optimized through adaptive contrast histogram equalization to improve visual quality. Experimental findings show that the proposed algorithm not only accelerates dehazing but also preserves color fidelity in gray and white regions, yielding visually pleasing results.
{"title":"Research and Analysis of Dark Channel Priori Dehazing Algorithm based on Guided Filtering","authors":"Haisheng Song, Nian Liu","doi":"10.54097/t7knrd65","DOIUrl":"https://doi.org/10.54097/t7knrd65","url":null,"abstract":"The dark channel priori dehaze algorithm based on minimum filtering is known to consume a significant amount of computational and storage resources for transmittance optimization, resulting in issues such as halo phenomena in gray and white areas of the image. In contrast to this, the proposed algorithm in this paper offers a novel approach to dark channel image dehazing. By leveraging dark channel a priori knowledge, the algorithm introduces an adaptive adjustment factor to enhance the realism of restored image details. Furthermore, the algorithm employs guided filtering for transmittance map refinement instead of traditional image keying. Subsequently, the haze-free image is reconstructed using the estimated atmospheric light and refined transmittance maps based on the atmospheric scattering model. Post image restoration, brightness and contrast are enhanced, and image optimization is achieved through adaptive contrast histogram equalization to improve visual quality. The experimental findings reveal that the proposed algorithm not only accelerates the efficiency of image dehazing but also sustains color fidelity in gray and white regions, yielding aesthetically pleasing outcomes.","PeriodicalId":504530,"journal":{"name":"Frontiers in Computing and Intelligent Systems","volume":"17 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140252707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}