Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874013
Yu-Qi Lin, Hai Shen
Agriculture is important to the development of China, and combining traditional agriculture with modern science and technology is a major trend. As a relatively new technology, wireless communication has great potential. This paper integrates ZigBee wireless communication technology into traditional agricultural greenhouses to make them intelligent. The system uses the CC2530 as the core chip for data processing, together with a YL-69 soil moisture sensor and a YL-47 DHT11 temperature and humidity sensor to monitor environmental changes in the greenhouse in real time; it also includes an alarm module. After debugging, the system runs stably and automatically monitors the greenhouse environment. It is low-cost and efficient and can be applied in the field of greenhouse environmental monitoring.
Title: Design of Greenhouse Monitoring System Based on ZigBee Technology
Published in: 2022 3rd International Conference on Information Science, Parallel and Distributed Systems (ISPDS)
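As a concrete illustration of the alarm module this abstract describes, the threshold check might look like the sketch below. The field names and threshold values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the greenhouse alarm logic: readings from the
# soil-moisture sensor (YL-69) and the temperature/humidity sensor (DHT11)
# are compared against configurable limits. All names and numbers here
# are illustrative, not from the paper.

def check_alarms(readings, limits):
    """Return the list of quantities whose reading falls outside its limits."""
    alarms = []
    for name, (low, high) in limits.items():
        value = readings.get(name)
        if value is None:
            continue  # sensor missing or not yet sampled
        if value < low or value > high:
            alarms.append(name)
    return alarms

limits = {
    "soil_moisture": (30.0, 70.0),  # percent, illustrative
    "temperature":   (10.0, 35.0),  # degrees Celsius, illustrative
    "humidity":      (40.0, 90.0),  # percent RH, illustrative
}
```

In a real deployment the CC2530 node would sample the sensors periodically and raise the alarm module whenever this check returns a non-empty list.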
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874233
Fu Xiong, Yan Yin, Xiaofeng Zhang, G. Yang, Ye Wu, Jianhua Zhou
To improve the tracking accuracy of an electro-optical servo tracking system under feedforward compound control, this paper proposes a method that determines precise feedforward coefficients, matched to the characteristics of the servo system, through particle swarm optimization. Combining the principle of feedforward compound control with its application in the photoelectric servo control structure, the feedforward link is simplified and designed to introduce the first- and second-order derivatives of the system's input tracking signal. A particle swarm algorithm with linearly decreasing inertia weight, which has strong optimization ability, is used to optimize the feedforward coefficients of the compound control system and improve its response and tracking ability, and a complete optimization procedure is designed. Simulations in typical application scenarios of the airborne photoelectric tracking system, combined with prototype verification experiments, show that the optimized feedforward coefficients greatly reduce the tracking misalignment angle, offer both stability and applicability, and track targets with varying relative velocity and relative acceleration more effectively.
Title: Optoelectronic Servo Tracking Technology Based on Particle Swarm Optimization Compound Control of Feedforward Coefficients
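The linearly decreasing inertia-weight PSO the abstract refers to can be sketched as follows. A toy quadratic cost stands in for the servo tracking-error objective, and every hyperparameter (swarm size, cognitive/social weights, bounds) is an illustrative assumption.

```python
import random

# Sketch of PSO with linearly decreasing inertia weight, the optimizer the
# paper uses to tune feedforward coefficients. The cost function below is a
# caller-supplied stand-in for the tracking-error objective.

def pso(cost, dim, bounds, n_particles=20, n_iter=100,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_cost = [cost(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for t in range(n_iter):
        # inertia weight decreases linearly from w_max to w_min
        w = w_max - (w_max - w_min) * t / (n_iter - 1)
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))  # clamp to bounds
            c = cost(x[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = x[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = x[i][:], c
    return gbest, gbest_cost
```

For feedforward tuning, `dim` would be the number of coefficients (e.g. the gains on the first and second derivatives of the input signal) and `cost` would run a closed-loop simulation and return the accumulated misalignment angle.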
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874128
Yingchun Chen, Jingliang Xue, Ou Li, Fang Dong
Application traffic identification is of great significance for improving network service quality and cyberspace security. Although deep learning has made great progress in traffic identification, many existing methods rely on manually designed features or on inflexible neural networks limited to fixed classes, which makes large-scale traffic identification challenging. To address this, this paper proposes a method based on a deep ResNet and an L2-triplet loss: it treats traffic data as images, learns features from the raw traffic, and outputs them as feature embeddings. Using these embeddings, both known and unknown application traffic can be identified. The paper also applies feature constraints to improve the adaptability of the neural network model to the traffic identification task. On the USTC-TFC2016 dataset, the proposed method achieves good identification performance.
Title: An Application Traffic Identification Method Based on Deep ResNet
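A minimal NumPy sketch of the L2-normalized embedding and triplet loss the abstract names; the margin value and batch layout are assumptions, and the ResNet that would produce the embeddings is omitted.

```python
import numpy as np

# Sketch of an L2-triplet loss: embeddings are L2-normalized, then the loss
# pulls same-application (anchor/positive) pairs together and pushes
# different-application (anchor/negative) pairs apart by at least a margin.

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def triplet_loss(anchor, positive, negative, margin=0.5):
    a, p, n = (l2_normalize(v) for v in (anchor, positive, negative))
    d_ap = np.sum((a - p) ** 2, axis=-1)  # squared distance, same class
    d_an = np.sum((a - n) ** 2, axis=-1)  # squared distance, other class
    return np.maximum(0.0, d_ap - d_an + margin).mean()
```

At identification time, known applications can be matched by nearest embedding, and flows far from every known cluster can be flagged as unknown traffic.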
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874112
N. Xu, Kangkang Song, Jiangjian Xiao, Chengbin Peng
Image denoising is a fundamental problem in computer vision and has received much attention. With the rapid development of convolutional neural networks, more and more deep learning-based noise reduction algorithms have emerged. However, current denoising networks tend to operate only in the RGB color space, ignoring information at the visual perception level, so the images they generate are overly smooth and lack texture and detail. This paper therefore proposes a novel denoising network in the image translation setting that uses a deep learning feature space instead of the traditional RGB color space to restore more realistic, more detailed texture in generated images. The network contains a visual perception generator and a multi-objective optimization network. The generator includes a multiscale encoding-decoding sub-network that extracts high-level perceptual features from input images. The optimization network combines a content consistency loss, a multiscale adversarial loss, and a discriminator feature alignment loss, which together retain detailed texture information. We synthesized noise of suitable intensity on publicly available datasets and conducted multiple experiments to verify the algorithm's effectiveness. The results show that the proposed algorithm significantly improves textures and details in denoised images: it removes a large amount of noise while preserving perceptual information at the visual level, generating more realistic images with detailed texture features.
Title: Visual Perception Preserved Denoising Network in Image Translation
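One plausible way to combine the three objectives named above is a weighted sum; the stand-in loss terms and weights below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Illustrative combination of the three losses the abstract names:
# content consistency, adversarial generation, and discriminator feature
# alignment. All formulas and weights here are assumed, not from the paper.

def content_loss(pred_feat, target_feat):
    # mean squared error between perceptual feature maps
    return np.mean((pred_feat - target_feat) ** 2)

def adversarial_loss(disc_scores):
    # non-saturating generator loss on discriminator outputs in (0, 1]
    return -np.mean(np.log(disc_scores + 1e-12))

def feature_alignment_loss(pred_disc_feats, target_disc_feats):
    # L1 distance between discriminator features at each scale
    return np.mean([np.mean(np.abs(p - t))
                    for p, t in zip(pred_disc_feats, target_disc_feats)])

def total_loss(pred_feat, target_feat, disc_scores,
               pred_disc_feats, target_disc_feats,
               w_content=1.0, w_adv=0.1, w_align=1.0):
    return (w_content * content_loss(pred_feat, target_feat)
            + w_adv * adversarial_loss(disc_scores)
            + w_align * feature_alignment_loss(pred_disc_feats,
                                               target_disc_feats))
```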
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874157
Mai He, Shulin Yang
To address the problems that anti-counterfeiting systems are centralized and that illegal merchants can copy authentic commodities cheaply and easily, this paper proposes an anti-counterfeiting system design that combines blockchain with NFC tags. The system is built on the Hyperledger Fabric blockchain platform. Anti-counterfeiting certificates prove that commodities are genuine: each certificate is encrypted with an ECC (Elliptic Curve Cryptography) public key to form ciphertext, which is stored in the blockchain ledger. The SHA-256 digest of the certificate and the ECC private key, which can recover the certificate from the ciphertext, are written into the NFC tag, and query counts are recorded in a MySQL database. This ensures the uniqueness and non-replicability of a product's anti-counterfeiting certificate and greatly increases both anti-counterfeiting performance and the difficulty of counterfeiting.
Title: Design of Anti-Counterfeiting System Based on Blockchain and NFC Tag
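The integrity check at the heart of the scheme can be sketched as below: the tag stores the SHA-256 digest of the certificate, and a verifier recomputes the digest after decrypting the on-chain ciphertext with the tag's private key. The ECC decryption step is abstracted away here; only the hash comparison is shown.

```python
import hashlib
import hmac

# Sketch of the certificate verification step. The ECC decryption that
# turns the on-chain ciphertext back into the certificate is assumed to
# have happened already; this only checks the recovered certificate
# against the digest stored on the NFC tag.

def tag_digest(certificate: bytes) -> str:
    return hashlib.sha256(certificate).hexdigest()

def verify_certificate(decrypted_certificate: bytes, digest_on_tag: str) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(tag_digest(decrypted_certificate), digest_on_tag)
```

Because the digest is fixed to one certificate and the ciphertext lives on the ledger, cloning a tag without the matching on-chain record fails this check.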
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874073
Weiguo Yi, Bin Ma, Siwei Ma, Heng Zhang
In this paper, an MM-HDC (Max Mean and High Density Connection) method is proposed that finds initial clustering centers based on the maximum mean distance and fuses clusters based on high-density connection. First, $\Delta\rho = 70\%$ is set to select candidate initial clustering centers and the mean distance is introduced; selection stops once the distance between the desired new mean center and some previously selected cluster center is less than $2 \cdot d_{c}$, completing the choice of initial centers. Then the k-means assignment policy clusters all data points by their distance to each initial center; the centers are updated and migrate repeatedly until the old and new cluster centers barely move (the distance between them is very small), at which point updating stops and the last clustering is taken as the final result. Finally, an iterative fusion method merges centers to obtain better clustering results. Experiments on classical datasets show that MM-HDC outperforms the DPC and k-means algorithms, and the improved density peak clustering algorithm achieves higher accuracy. Moreover, MM-HDC obtains satisfactory results on datasets with special shapes or uneven distributions.
Title: Density Peak Clustering Algorithm Based on High Density Connection with Entropy Optimization
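The assignment-and-update loop in the middle of the method can be sketched as follows: points go to the nearest center, centers migrate to the mean of their members, and iteration stops once the centers barely move. The maximum-mean-distance initialization and the final high-density fusion step are omitted; initial centers are supplied by the caller.

```python
import numpy as np

# Sketch of the k-means-style assignment/update loop described above.
# Stops when the total center movement falls below a tolerance.

def assign_and_update(points, centers, tol=1e-4, max_iter=100):
    centers = centers.astype(float).copy()
    for _ in range(max_iter):
        # distance from every point to every center, shape (n_points, k)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            points[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(len(centers))
        ])
        moved = np.linalg.norm(new_centers - centers)
        centers = new_centers
        if moved < tol:  # old and new centers barely differ: stop
            break
    return labels, centers
```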
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874195
Jihang Zhang, Jianxin Zhou, Ning Zhou
Network traffic classification plays an important role in network management. To improve classification accuracy on encrypted traffic, a classification method based on a subspace triple attention module is proposed. In this method, the traffic-data feature map is divided into several subspaces along the channel dimension, and in each subspace a one-dimensional feature coding is computed for each of the three channel branches. The ISCX public datasets, which include both general and protocol-level encrypted network traffic, are used for the classification experiments. The results show that the proposed method achieves better classification accuracy on encrypted traffic datasets than other current methods.
Title: Network Traffic Classification Method Based on Subspace Triple Attention Mechanism
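The subspace split the abstract describes might look like this sketch: a (C, H, W) feature map is divided into channel groups, and each group is pooled along each of the three axes to give one-dimensional codes per branch. The pooling choice (mean) and group count are assumptions; the attention weighting built from the codes is omitted.

```python
import numpy as np

# Sketch of splitting a feature map into channel subspaces and computing a
# one-dimensional code per branch in each subspace. Mean pooling is an
# assumed stand-in for the paper's coding calculation.

def subspace_codes(feature_map, groups):
    c, h, w = feature_map.shape
    assert c % groups == 0, "channels must divide evenly into subspaces"
    codes = []
    for sub in np.split(feature_map, groups, axis=0):
        codes.append({
            "channel": sub.mean(axis=(1, 2)),  # one value per channel
            "height":  sub.mean(axis=(0, 2)),  # one value per row
            "width":   sub.mean(axis=(0, 1)),  # one value per column
        })
    return codes
```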
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874169
Na Su, Yun Pan
The task of style transfer is to transfer the style of a target image onto a required image. Studies on image style transfer have increased in recent years, and it is now often used in animation production, software interface beautification, image expansion, and the creation of decorative patterns in a variety of styles. As these algorithms proliferate, how to evaluate them becomes an important problem: only with evaluation can comparative analysis be carried out and algorithms continue to develop. The importance of evaluation cannot be underestimated. This paper first systematically surveys evaluation methods from subjective and objective perspectives. Subjective methods divide into questionnaires and user studies, which rely mainly on human perception and are easily influenced by the subjects. Objective methods evaluate algorithms from a more precise perspective, relying mainly on statistics and evaluation indicators. The paper then describes experiments with several evaluation methods and assesses their effects. Finally, it points out the problems in current evaluation methods to provide direction for follow-up research.
Title: Research on Evaluation Method of Image Style Transfer Algorithm
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874064
Dan Ji, Y. Liu, Cheng Wang
Sorting workpieces is one of the key steps in production practice, and machine vision is often used in the sorting process to detect workpiece edge information and screen out other information such as noise. To address the problems of Gaussian filtering denoising and manually set thresholds in the traditional Canny edge detection algorithm, an improved Canny algorithm is proposed for workpiece edge detection. The algorithm replaces Gaussian filtering with the MeanShift algorithm, which preserves edge information while denoising, and uses the maximum between-class variance (Otsu) algorithm to obtain an adaptive optimal threshold, improving the algorithm's adaptability. Experimental results show that under both subjective visual and objective evaluation, the algorithm significantly improves on the edge detection results of the traditional Canny algorithm.
Title: Research on Image Edge Detection Based on Improved Canny Operator
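Otsu's maximum between-class variance threshold, which the improved Canny variant uses to pick its threshold adaptively, can be sketched on 8-bit gray levels as follows; the MeanShift pre-filtering step is omitted.

```python
import numpy as np

# Otsu's method: choose the threshold that maximizes the between-class
# variance of the gray-level histogram, giving an adaptive threshold
# instead of a manually set one.

def otsu_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

In the improved Canny pipeline the resulting value (and a fraction of it) would serve as the high and low hysteresis thresholds.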
Pub Date: 2022-07-22, DOI: 10.1109/ISPDS56360.2022.9874167
Mian Du, Yuwei Zeng, Xun Zhu, Lanlan Zhang
When popular deep learning-based NL2SQL models are applied directly in a specific scenario, problems arise from characteristics rooted in background knowledge; in our case, the terminologies and abbreviations in the high value payment system database are the main obstacles. This paper proposes a framework, compatible with BERT-CN and RAT-SQL, for data inquiry tasks within the high value payment system; both BERT and RAT-SQL are state-of-the-art models that have achieved strong performance on many tasks. In addition, NER and data preprocessing toolkits are introduced to align the terminologies and abbreviations with the columns and tables. Both the training and testing stages show acceptable results, and the reasons are discussed. The framework has great potential to be extended to other application scenarios with minimal modification.
Title: High Value Payment System Data Inquiry Using a NL2SQL Framework
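The terminology/abbreviation alignment step might be sketched as a glossary lookup applied before the question reaches the NL2SQL model. The glossary entries below are invented examples, not terms from the paper's payment-system schema.

```python
import re

# Hypothetical sketch of abbreviation alignment for NL2SQL preprocessing:
# map domain abbreviations in the natural-language question onto the
# table/column vocabulary the model was trained on. Glossary entries are
# illustrative inventions.

GLOSSARY = {
    "amt":  "amount",
    "acct": "account",
    "txn":  "transaction",
}

def expand_abbreviations(question: str, glossary: dict = GLOSSARY) -> str:
    def repl(match):
        word = match.group(0)
        return glossary.get(word.lower(), word)  # leave unknown words as-is
    return re.sub(r"[A-Za-z]+", repl, question)
```

In the full framework an NER pass would additionally tag schema entities so that RAT-SQL's relation-aware encoder can link question tokens to the right columns and tables.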