Research on Belt and Road Big Data Visualization Based on Text Clustering Algorithm
Yana Wen, Tingyue Wei, Kewei Cui, Bai Ling, Yahao Zhang, Meng Huang
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449322

In the era of big data, demand for the visual presentation of data keeps growing. To achieve better big data display, this article introduces a text clustering algorithm for data crawling and ECharts for big data visualization. The system was built on the MVVM architecture with the Vue framework, used ThinkPHP as the back-end framework, and was developed with ES6 technologies and specifications. ECharts, iView, GIS technology, and JavaScript were used to implement the economic big data module on the web side; CSS3, HTML5, and GIS technology were applied to the project achievement and university alliance modules; and ECharts, HTML5, and JavaScript function libraries were used for the national information module. Stored procedures and database index optimization enable rapid filtering of massive data, and related data are dynamically updated and displayed through two-way data binding. By combining real-time positioning with GIS technology, the system measures the distance between the user and a destination and automatically plans a tour route to provide related services. The system can offer feasibility suggestions, a theoretical basis, and technical support to strategic researchers and experts working on the "Belt and Road" initiative.
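The abstract does not say how user-to-destination distance is computed; a common choice when working from GPS coordinates, as this system does, is the haversine great-circle formula. A minimal sketch (the city pair is only an illustration, not data from the paper):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS-84 points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: distance from Beijing (39.9042 N, 116.4074 E) to Xi'an (34.3416 N, 108.9398 E)
d = haversine_km(39.9042, 116.4074, 34.3416, 108.9398)
```

The returned distance can then feed a routing step to plan the tour route.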
Implementing Game Strategies Based on Reinforcement Learning
Botong Liu
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449311

Artificial intelligence (AI) techniques such as reinforcement learning have increasingly been applied to game playing in recent years. In this work, a deep reinforcement learning model was used to play the game Flappy Bird, the aim being to let a computer play a simple game and collect the corresponding data for AI learning. Game frames were scaled, converted to grayscale, and brightness-adjusted; before entering a state, several consecutive frames were superposed and combined into multi-dimensional image data. A Deep Q Network predicted the best action to execute in each game state, successfully converting the game decision problem into a classification and recognition problem over instantaneous multi-dimensional images, which was solved with a convolutional neural network. In experiments, computer players controlled by the deep neural network achieved better results than human players. The model, which combines a deep neural network with reinforcement learning, can be applied to other games.
An IoT Based Smart System to Recommend Suitable Environment
M. Hasan, Anika Nawar, M. H. Khan, Lafifa Jamal
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449329

The demand for smart monitoring systems has increased as a way to reduce the impact of environmental pollution. In this paper, a smart system is proposed that includes an IoT device which can monitor the pollution and explosion level of septic tanks as well as the surroundings. The system can detect whether a particular area is environment-friendly or not, and a notification system alerts the respective individual when a risk factor exists in the environment. The proposed design was compared with existing approaches: it achieves 93.78% accuracy, 95.68% precision, and 96.52% recall, improvements of 6.11%, 3.37%, and 1.84% respectively over the best existing approach.
Efficient Executions of Community Earth System Model onto Accelerators Using GPUs
Shijin Yuan, Cheng Wang, Bin Mu, Xiaodan Luo
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449334

As climate models become more and more complicated, running them efficiently poses an enormous challenge. In this paper, we discuss the acceleration of the Community Earth System Model (CESM), a large-scale MPI-parallel model that still suffers from low execution efficiency. We study porting the Community Land Model (CLM), an active component of CESM, onto the Graphics Processing Unit (GPU), focusing on the routine that occupies the most execution time, namely CanopyFluxes. To expedite computation, we accelerated the CESM model with GPU parallel computing: specifically, we wrote CUDA kernels to optimize several matrix computations in CanopyFluxes, and used GPU caches and compiler options for further optimization. Running on a cluster of five computing nodes with five GPUs, the CanopyFluxes routine achieves a speedup of 4.21x; in a simulation on Tianhe-2 with NVIDIA Tesla K80 GPUs, the speedup rises to 14.92x.
LoRa Backscatter Automated Irrigation Approach: Reviewing and Proposed System
Siaka Konate, Changli Li, Lizhong Xu
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449336

A migration to new irrigation techniques is needed in sub-Saharan African countries such as Mali, where the irrigation systems currently in use have flaws that need to be remedied. In this paper we first present a literature review of existing approaches to automatic irrigation and discuss their limits. We then propose an automatic irrigation system based on backscatter communication, which is more efficient and offers long range and low power consumption.
Anomaly Detection of Bolt Tightening Process Based on Improved SMOTE
Xiaolei Li, Yuxin Wu, Q. Jia
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449304

For some industrial production processes, deep faults can be detected by data mining and analytics of the process data, helping to achieve a higher level of production quality. This paper studies anomaly detection in the bolt tightening process, where the imbalanced data set is the main difficulty. An improved synthetic minority over-sampling technique (SMOTE) is proposed based on density-based spatial clustering of applications with noise (DBSCAN). By oversampling within-class imbalanced samples, the improved SMOTE algorithm overcomes the shortcomings of the traditional SMOTE method and retains more sample features. For feature extraction and classification, the sample classifier is trained with the XGBoost algorithm. An experiment on a factory's real data set shows that the improved SMOTE algorithm yields a substantial improvement in classification performance.
Using combined Soft-NMS algorithm Method with Faster R-CNN model for Skin Lesion Detection
Cheng Huang, Anyuan Yu, Honglin He
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449303

The detection of skin diseases has always been a hot topic in the medical field. With the development of deep learning, more and more neural network models have been used in medical research and have achieved good results. In this paper, based on the existing object detection model Faster R-CNN, we replace its NMS algorithm with Soft-NMS. The experimental results verify the effectiveness of this improvement: compared with Faster R-CNN, our method frames the skin disease area more accurately by reducing misrecognized non-lesion regions, and it better handles skin lesions with blurred boundaries. The data set comes from ISIC (the International Skin Imaging Collaboration).
Nodule Slices Detection based on Weak Labels with a Novel Deep Learning Method
Rongguo Zhang, Huiling Zhang, Shaokang Wang, Kuan Chen
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449776

Early detection of lung nodules is essential to the diagnosis and treatment of lung cancer. In this paper, we propose an improved method to automatically identify the slices containing lung nodules in computed tomography (CT) scans. This deep learning-based method is intended as a tool for fast screening of lung nodules, in order to reduce CT reading time for radiologists. The proposed model combines convolutional neural networks (CNN) with variable-length bidirectional Long Short-Term Memory networks (LSTM). It relies on a supervised learning approach that requires only slice labels on the training data: the labels indicate which CT slices contain a nodule, but not the exact location of the nodule. The method was evaluated on two datasets with 5-fold cross-validation. The first dataset, collected from two 3A-grade hospitals in China, contained 1726 CT volumes (positives vs. negatives, 1:1), each labeled by at least three radiologists with more than five years of experience. The second dataset was the publicly available LIDC-IDRI database of 888 scans, which underwent a two-phase annotation process by four experienced radiologists. On the first dataset, our method reached a detection sensitivity of 88.2% with 0.5 false positives per CT volume; on the second, a sensitivity of 86.9% with an average of 0.8 false positives per subject. The results demonstrate that the proposed method achieves high sensitivity and specificity in identifying CT slices with lung nodules. Moreover, because it requires only slice-level labels on the training data, it is easy to implement and has promising potential for reducing radiologists' CT reading time.
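With slice-level weak labels, a natural screening rule is to flag a volume when any slice's predicted nodule probability crosses a threshold, and to surface those slices to the radiologist. The sketch below illustrates that aggregation step only; the rule and threshold are illustrative, not taken from the paper:

```python
import numpy as np

def volume_prediction(slice_scores, threshold=0.5):
    """Flag a CT volume as nodule-positive when any per-slice probability
    exceeds the threshold, and return the indices of the flagged slices
    so a reader can jump straight to them."""
    slice_scores = np.asarray(slice_scores, dtype=float)
    has_nodule = bool(slice_scores.max() > threshold)
    flagged = np.flatnonzero(slice_scores > threshold)  # candidate slices
    return has_nodule, flagged

# Fake per-slice probabilities for one 4-slice volume
has, idx = volume_prediction([0.1, 0.2, 0.9, 0.4])
```

Raising the threshold trades sensitivity for fewer false positives per volume, which is the operating-point trade-off the reported 0.5 and 0.8 false-positive rates reflect.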
Classification of Liver Cancer Subtypes Based on Hierarchical Integrated Stacked Autoencoder
Tiantian Zhang, Shuxu Zhao, Zhaoping Zhang
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449316

The development of high-throughput sequencing technology provides an opportunity to obtain multi-omics data for liver cancer. However, omics data often come from different platforms with different attributes, and are characterized by high feature dimension and small sample size. This increases model overfitting and class imbalance, and cross-platform integration of omics data challenges traditional data analysis methods. In this regard, a Hierarchical Integrated Stacked Autoencoder (HI-SAE) is proposed, which achieves deeper feature learning and data integration while reducing the differences caused by the characteristics of the data itself. Finally, the integrated feature representation is fed to a softmax classifier to identify liver cancer subtypes. Experiments show that classification accuracy with HI-SAE feature learning is 3.7% higher than with PCA and 7.6% higher than with NMF.
Low Light Image Enhancement Algorithm Based on Retinex and Dehazing Model
Zijun Guo, Chao Wang
Proceedings of the 6th International Conference on Robotics and Artificial Intelligence, 2020-11-20. DOI: 10.1145/3449301.3449777

Low-light images often have low visibility, which not only affects the visual effect but also reduces the performance of algorithms that require high-quality input. For the problem of low-light image enhancement, this paper proposes a composite enhancement algorithm. First, the dark channel prior model and the Retinex model are combined through two adjustable parameters to obtain a new enhancement model, DeRetinex. Then, exploiting the duality between the dehazing model and Retinex theory, the image from the previous step is inverted and enhanced a second time with DeRetinex, which eliminates the haze introduced by the first enhancement. Compared with existing mainstream algorithms, the proposed algorithm avoids overexposure and offers rich texture details, low noise, and good color recovery.