Tomato is one of the most important economic crops in China. It is the country's fourth-largest vegetable and fruit crop, with an annual output of about 55 million tons, accounting for 7% of total vegetable production. Because of its large planting area and high yield, producing high-quality vegetables is a development direction of modern agriculture. This paper therefore adopts a deep learning method: convolutional neural networks (CNNs) are used to detect diseases and pests from images of tomato leaves, a stacking ensemble of an optimized DenseNet121 and MobileNet-V2 identifies the diseases and insect pests on the leaves, and the ensemble is compared with the individual DenseNet121 and MobileNet-V2 models. The results show that the fused model detects pests and diseases more accurately than the other algorithms, reaching a final detection accuracy of 98.24%. This provides a more effective method for the treatment of tomato diseases and insect pests.
Kunao Zhang, Zhenxing Liang, "Research on tomato leaf disease identification based on deep learning," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667384
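A minimal sketch of the stacking idea named in this abstract is given below: class probabilities from DenseNet121 and MobileNet-V2 backbones are concatenated and fed to a logistic-regression meta-learner. The 10-class setup, input size, and choice of meta-learner are assumptions of the sketch, not the authors' implementation.

```python
# Sketch: stacking ensemble over DenseNet121 and MobileNetV2 (assumed setup).
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121, MobileNetV2
from sklearn.linear_model import LogisticRegression

NUM_CLASSES = 10  # assumed number of tomato leaf disease/pest classes

def build_base(backbone):
    """Wrap an ImageNet backbone with a small classification head."""
    base = backbone(include_top=False, weights="imagenet",
                    input_shape=(224, 224, 3), pooling="avg")
    out = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    return models.Model(base.input, out)

densenet = build_base(DenseNet121)   # assumed to be fine-tuned on the leaf
mobilenet = build_base(MobileNetV2)  # images before the stacking stage

def stacked_features(x):
    """Level-0 outputs: concatenate both models' class probabilities."""
    return np.concatenate([densenet.predict(x), mobilenet.predict(x)], axis=1)

# Level-1 meta-learner trained on held-out predictions (stacking):
meta = LogisticRegression(max_iter=1000)
# meta.fit(stacked_features(x_valid), y_valid)   # x_valid/y_valid: held-out split
# y_pred = meta.predict(stacked_features(x_test))
```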
DNase I hypersensitive sites (DHSs) are chromatin regions that are hypersensitive to cleavage by the DNase I enzyme. They are a universal marker of regulatory DNA and are associated with genetic variation in a wide range of diseases and phenotypic traits. However, traditional experimental methods limit the rapid detection of DHSs and the development of this area, so effective and accurate computational methods for exploring potential DHSs are urgently needed. In this work, a deep learning approach called iDHS-DPPE is proposed to predict DHSs in different cell types and developmental stages of the mouse. iDHS-DPPE uses a dual-path parallel ensemble neural network to identify DHSs accurately. First, the DNA sequence is segmented into 2-mers to extract information. Then, an attention model captures long-range dependencies while the MSFRN model performs hierarchical information fusion. The two models are trained separately to enrich the feature information. Finally, an ensemble decision over the two models yields the prediction results, integrating information from multiple views. The average AUC across all datasets was 93.1% in 5-fold cross-validation and 93.3% in independent testing. The experimental results demonstrate that iDHS-DPPE outperforms state-of-the-art methods on all datasets, showing that it is effective and reliable for identifying DHSs.
X. Lv, Yufeng Wang, Hongwen Liu, "iDHS-DPPE: a method based on dual-path parallel ensemble decision for DNase I hypersensitive sites prediction," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667447
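The 2-mer segmentation step is concrete enough to sketch; below is one plausible implementation using overlapping dinucleotides mapped to integer ids. The stride and the vocabulary ordering are assumptions, not necessarily the encoding used by iDHS-DPPE.

```python
# Sketch: split a DNA sequence into overlapping 2-mers and integer-encode them.
from itertools import product

# Vocabulary of all 16 dinucleotides: "AA" -> 0, "AC" -> 1, ..., "TT" -> 15.
KMER2_VOCAB = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=2))}

def to_2mers(seq: str, stride: int = 1) -> list[int]:
    """Segment a DNA sequence into overlapping 2-mers and map them to ids,
    skipping windows that contain non-ACGT characters."""
    seq = seq.upper()
    return [KMER2_VOCAB[seq[i:i + 2]]
            for i in range(0, len(seq) - 1, stride)
            if seq[i:i + 2] in KMER2_VOCAB]

print(to_2mers("ACGTAC"))  # [1, 6, 11, 12, 1] -> AC, CG, GT, TA, AC
```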
In today's era of big data, the logistics supply chain generates massive amounts of data at every stage, and privacy issues around logistics data are increasingly prominent. To efficiently use each enterprise's logistics data while meeting business needs and achieving secure data sharing, a federated learning-based logistics data sharing scheme is proposed. Federated learning is used to build models from multiple data sources without exchanging raw data; the reputation value of each enterprise is stored on a blockchain, and enterprises that provide high-quality shared data are rewarded. Finally, the effectiveness of the scheme and the impact of data quality and algorithm selection on model training are verified through simulation experiments.
Zhihui Wang, Deqian Fu, Jiawei Zhang, "Logistics data sharing method based on federated learning," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667310
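One way to connect the federated learning and reputation pieces of this scheme, purely as an assumption of this sketch, is to weight each enterprise's model update by its reputation during aggregation; the abstract only says reputation is stored on chain and used for rewards, so the weighting rule below is illustrative rather than the paper's method.

```python
# Sketch: reputation-weighted federated averaging (assumed aggregation rule).
import numpy as np

def reputation_fedavg(client_weights, reputations):
    """Aggregate per-client model parameters, weighting each client by its
    normalised reputation score instead of plain sample counts."""
    rep = np.asarray(reputations, dtype=float)
    rep = rep / rep.sum()
    layers = zip(*client_weights)  # group the same layer across clients
    return [sum(r * w for r, w in zip(rep, layer)) for layer in layers]

# Example: three enterprises share two-layer models with matching shapes.
clients = [[np.ones((2, 2)) * k, np.ones(2) * k] for k in (1.0, 2.0, 3.0)]
global_model = reputation_fedavg(clients, reputations=[0.9, 0.5, 0.1])
print(global_model[0])  # reputation-weighted average of the first layer
```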
Hidden digital watermarking is one of the important anti-counterfeiting means for protecting copyright. By extracting a hidden digital watermark from images, audio, or video with specific codes and algorithms, it can provide strong evidence of copyright ownership. The idea is to treat the image as a two-dimensional matrix, apply the corresponding transform to that matrix, select suitable coefficient positions, and embed the digital watermark data at those positions. In this paper, the discrete cosine transform (DCT) is used to embed the digital watermark data, and the inverse DCT is used to realize watermark extraction.
Qiang Deng, Zuxu Zou, "Research on digital watermarking algorithm based on discrete cosine transform," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2668152
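A minimal, non-blind version of DCT watermark embedding and extraction is sketched below: the watermark is added to a block of mid-frequency DCT coefficients and recovered with the inverse transform plus the original image. The coefficient band, strength alpha, and non-blind extraction are assumptions of the sketch, not the paper's exact algorithm.

```python
# Sketch: non-blind DCT-domain watermark embedding and extraction.
import numpy as np
from scipy.fft import dctn, idctn

ALPHA = 10.0  # assumed embedding strength

def embed(cover: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Add the watermark to a mid-frequency block of the cover's 2-D DCT."""
    coeffs = dctn(cover, norm="ortho")
    r, c = mark.shape
    coeffs[32:32 + r, 32:32 + c] += ALPHA * mark   # assumed coefficient band
    return idctn(coeffs, norm="ortho")

def extract(marked: np.ndarray, cover: np.ndarray, shape) -> np.ndarray:
    """Recover the watermark by differencing the two images' DCTs."""
    diff = dctn(marked, norm="ortho") - dctn(cover, norm="ortho")
    r, c = shape
    return diff[32:32 + r, 32:32 + c] / ALPHA

cover = np.random.rand(256, 256)
mark = (np.random.rand(32, 32) > 0.5).astype(float)
recovered = extract(embed(cover, mark), cover, mark.shape)
print(np.allclose(recovered, mark))  # True, up to floating-point error
```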
Multi-hop question answering aims to predict answers to questions and generate supporting facts for those answers by reasoning over the content of multiple documents. The recently proposed Semantic Role Labeling Graph Reasoning Network (SRLGRN) has achieved excellent performance on multi-hop QA tasks. However, SRLGRN is lacking in modelling textual relationships, which are important cues for reasoning. To this end, this paper proposes an enhanced SRLGRN multi-hop question answering approach that models textual relationships at different granularities (document relationships and sentence relationships). By modelling document relationships, a novel document filter based on a document-relationship threshold is designed for SRLGRN to dynamically select question-relevant documents from multiple documents. By modelling sentence relationships, a sentence-relationship-aware answer type prediction module is added to SRLGRN, which represents the sentences in documents as sentence graphs and then uses a graph convolutional network to predict the answer type. The predicted answer type further guides the answer reasoning module of SRLGRN to obtain the answer together with its supporting facts. The experimental results show that the proposed scheme outperforms SRLGRN in both answer prediction and supporting fact prediction, with a 2% improvement in answer F1 and a 3.1% improvement in joint F1.
Xuesong Zhang, G. Li, Dawei Zhang, Zhao Lv, Jianhua Tao, "Multi-hop question answering for SRLGRN augmented by textual relationship modelling," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667770
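The document filter is described only as a threshold over document relationships, so the sketch below stands in a simple similarity score (TF-IDF cosine) for the learned relationship model; the 0.15 threshold and the scoring function are assumptions, not the paper's method.

```python
# Sketch: threshold-based document filtering before multi-hop reasoning.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_documents(question: str, documents: list[str], threshold: float = 0.15):
    """Keep only documents whose similarity to the question exceeds a threshold."""
    vec = TfidfVectorizer().fit([question] + documents)
    q = vec.transform([question])
    docs = vec.transform(documents)
    scores = cosine_similarity(q, docs)[0]
    return [d for d, s in zip(documents, scores) if s >= threshold]

docs = ["Paris is the capital of France.",
        "The Eiffel Tower is in Paris.",
        "Bananas are rich in potassium."]
print(filter_documents("Which city is the Eiffel Tower located in?", docs))
```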
To address the urgent need for information-based management of shared and stored material support resources, which existing management platforms cannot adequately support, this paper presents an intelligent approach to storing support materials in an automated stereoscopic warehouse. Modern logistics technology and equipment, economic-mathematical methods, and information technology are used to design an automated warehouse system with functions such as digital and intelligent storage, grouped ordering and production scheduling, and intelligent warehousing. The system realizes seamless interfacing with upstream and downstream systems, embodies intelligent management and other smart-logistics elements, and provides a basis for scientific decision-making and scheduling by higher authorities.
Yuejun Shi, Peng Chen, Qian Zheng, Qinghang Li, "Intelligent system design of storage materials in automatic stereo warehouse," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667658
Without any processing, a synthesized image looks visually unrealistic because of inherent differences between the appearance of the foreground and the background. Image harmonization addresses this problem: its purpose is to adjust the foreground appearance of a synthesized image to be closer to the background, thereby eliminating local visual differences. However, because the range of spatial feature interaction is limited during feature extraction, the global appearance transfer is often poor. To solve this problem, we propose an enhanced spatial feature interaction module. We also propose a back-projection upsampling module, which refines the reconstruction error during upsampling and better restores the details of the reconstructed foreground. Our experiments on a public dataset, iHarmony4, show that the method effectively generates composite images with a consistent overall appearance and enhanced detail.
Tianyanshi Liu, Yuhang Li, Youdong Ding, "Image harmonization with spatial feature interaction and back-projection upsample," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667660
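The back-projection upsampling idea (upsample, project back down, and correct with the low-resolution residual) can be sketched as a DBPN-style up-projection block; the kernel sizes, channel counts, and activation below are assumptions, not the authors' module.

```python
# Sketch: back-projection upsampling unit with residual refinement (PyTorch).
import torch
import torch.nn as nn

class BackProjectionUp(nn.Module):
    """Upsample, project back down, and use the low-resolution residual to
    correct the upsampled features (one back-projection step)."""
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        k, s, p = 2 * scale, scale, scale // 2  # e.g. 4/2/1 for 2x upsampling
        self.up1 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.down = nn.Conv2d(channels, channels, k, s, p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.act = nn.PReLU()

    def forward(self, x):
        h0 = self.act(self.up1(x))    # first high-resolution estimate
        l0 = self.act(self.down(h0))  # project back to low resolution
        e = l0 - x                    # reconstruction error at low resolution
        h1 = self.act(self.up2(e))    # upsample the error
        return h0 + h1                # error-corrected high-resolution features

feats = torch.randn(1, 64, 32, 32)
print(BackProjectionUp(64)(feats).shape)  # torch.Size([1, 64, 64, 64])
```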
The growing size of neural networks makes them difficult to deploy on edge devices with limited computing resources. Network pruning has become one of the most successful model compression methods in recent years. Existing works usually compress models by removing filters judged unimportant under some importance criterion. However, importance-based algorithms tend to discard parameters that extract edge features but have small criterion values, and recent studies have shown that existing criteria rely on norms and lead to similar compressed model structures. To address the problems of ignoring edge features and manually specifying the pruning rate in current importance-based pruning algorithms, this paper proposes an automatic recognition framework for neural network structural redundancy based on reinforcement learning. First, we perform cluster analysis on the filters of each layer, mapping the filters into a multi-dimensional space to generate similar sets with different functions. We then propose a criterion for identifying redundant filters within each similar set. Finally, we use reinforcement learning to automatically optimize the cluster dimension and determine the optimal pruning rate for each layer, reducing the performance loss caused by pruning. Extensive experiments on various benchmark network architectures and datasets demonstrate the effectiveness of the proposed framework.
Tingting Wu, Chunhe Song, Peng Zeng, "A framework for automatic identification of neural network structural redundancy based on reinforcement learning," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2668217
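The per-layer clustering step can be sketched as follows: flatten each filter, group similar filters with k-means, and flag all but one representative per cluster as redundant. The cluster count and the keep-the-filter-closest-to-the-centroid rule are assumptions; the paper's actual redundancy criterion and its reinforcement-learning search over cluster dimensions are not reproduced here.

```python
# Sketch: finding candidate redundant filters in one conv layer via clustering.
import numpy as np
from sklearn.cluster import KMeans

def redundant_filters(conv_weight: np.ndarray, n_clusters: int):
    """conv_weight: (out_channels, in_channels, kh, kw). Returns indices of
    filters judged redundant within their similarity cluster."""
    flat = conv_weight.reshape(conv_weight.shape[0], -1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    redundant = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        keep = members[np.argmin(dists)]          # representative filter
        redundant.extend(i for i in members if i != keep)
    return sorted(redundant)

weights = np.random.randn(64, 32, 3, 3)               # a mock conv layer
print(len(redundant_filters(weights, n_clusters=16)))  # 48 filters flagged
```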
Low block propagation efficiency in blockchain networks leads to low throughput and high confirmation delay. To address this, a dynamic neighbor node selection strategy is designed. According to a node elimination rate, neighbor nodes are divided into reserved nodes and replacement nodes. First, all possible neighbor-replacement combinations are found using a combination rule. Then, in each round, the neighbors with good transmission timeliness are selected as reserved nodes to improve block transmission efficiency. The selection of reserved nodes is treated as a multi-armed bandit problem, and a mathematical model is established using a Boltzmann selection strategy. The parameters of the strategy, such as the node elimination rate and average propagation delay, are analyzed and verified by simulation experiments. The experimental results show that the strategy can optimize the topology of the blockchain network and effectively reduce its average propagation delay.
Chen Zhuo, Wan Guoan, Zhou Chuan, "Neighborhood node selection strategy for blockchain networks based on Boltzmann method," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667300
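Boltzmann (softmax) selection over estimated propagation delays is simple enough to sketch: neighbors with lower observed delay are retained with higher probability. The temperature value and the use of mean delay as the (negated) reward signal are assumptions of this sketch, not the paper's exact model.

```python
# Sketch: Boltzmann selection of reserved neighbors in a bandit setting.
import numpy as np

def boltzmann_select(mean_delay_ms: np.ndarray, n_keep: int, tau: float = 20.0,
                     rng=np.random.default_rng(0)):
    """Sample n_keep reserved neighbors without replacement, with probability
    proportional to exp(-delay / tau), so faster neighbors are favoured."""
    prefs = np.exp(-mean_delay_ms / tau)   # lower delay -> higher preference
    probs = prefs / prefs.sum()
    return rng.choice(len(mean_delay_ms), size=n_keep, replace=False, p=probs)

delays = np.array([35.0, 120.0, 60.0, 45.0, 200.0, 80.0])  # per-neighbor estimates
print(boltzmann_select(delays, n_keep=3))  # fast neighbors dominate the draw
```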
An artificial-intelligence-based model for killing agricultural pests is constructed. Within AI technology for smart agriculture, and taking phototactic pests as an example, an expert database for pest killing is built by identifying pest shapes. A solar-powered light-trapping platform is constructed, and a plane rectangular coordinate system is established on the intelligent killing-lamp platform. When insects are attracted to the platform, it automatically compares them against the pest database to identify those entering the platform. Once an insect is confirmed to be a pest, the killing function is triggered at the coordinates provided by the platform, achieving automatic identification and elimination of harmful insects and forming an intelligent pest killing model.
Yongsuo Zi, Wang Shu, Yinsheng Hong, "Construction of agricultural pest killing model based on artificial intelligence: take phototactic pests as an example," Fifth International Conference on Computer Information Science and Artificial Intelligence, 2023. doi:10.1117/12.2667463