Fake Review Identification Method Based on Topic Model and Att-BiLSTM
Lei Shi, Suzhen Xie, Yongcai Tao, Lin Wei, Yufei Gao
DOI: 10.1145/3483845.3483881
Review rating systems provide valuable information to potential users, but they also encourage the creation of profit-driven fake reviews. Fake reviews not only drive consumers to buy low-quality products or services but also erode consumers' long-term confidence in review rating platforms. Two main causes of the low detection accuracy of fake reviews in recent studies are (1) insufficient learning of features that capture the emotional intensity of the text and (2) inaccurate identification of topic words in reviews. To address these problems, we propose a novel identification method based on a topic model and an attention-based BiLSTM (Att-BiLSTM). The proposed method computes sentiment and subjectivity values for each text with TextBlob and incorporates topic features to train the classifier for fake review recognition. Comparative experiments show that the proposed model outperforms the baseline models.
{"title":"Fake Review Identification Method Based on Topic Model and Att-BiLSTM","authors":"Lei Shi, Suzhen Xie, Yongcai Tao, Lin Wei, Yufei Gao","doi":"10.1145/3483845.3483881","DOIUrl":"https://doi.org/10.1145/3483845.3483881","url":null,"abstract":"The review rating system provides valuable information to potential users, but it also encourages the creation of profit-driven fake reviews. Fake reviews and comments not only drive consumers to buy low-quality products or services, but also erode consumers' long-term confidence in review rating platforms. At present, two main reasons for the low detection accuracy of fake comments in recent studies are: (1) lack of feature learning of emotional intensity of text; (2) the inaccuracy of the identification of topic words in comments. To solve the above problems, we propose a novel identification method based on topic model and Att-BiLSTM mechanism. The proposed method calculates text affective and subjective values using TextBlob, incorporating the topic feature to train the classifier for fake review recognition. Comparative experiments show that the model effect is better than other models.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134477707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of AR in 3D model
Liang Ma
DOI: 10.1145/3483845.3483891
In recent years, with the development and application of AR technology, more and more augmented reality applications have appeared in education, product introduction, and similar domains, presenting information through specific 3D models: for example, popularizing human anatomy through skeleton models or introducing the composition of a car through a car model. As a brand-new interaction method, an augmented reality (AR) system can provide more detailed information, give people a direct, intuitive impression, and improve the efficiency with which information is understood.
{"title":"Application of AR in 3D model","authors":"Liang Ma","doi":"10.1145/3483845.3483891","DOIUrl":"https://doi.org/10.1145/3483845.3483891","url":null,"abstract":"In recent years, with the development and application of AR technology, more and more augmented reality applications have begun to appear in education, introduction, etc. to achieve display through specific 3D models, such as popularizing human body information through human skeleton models, and introducing cars' composition information through car models. As a brand-new interactive method, augmented reality-AR system can provide more detailed information, for the human by the direct-viewing feeling, and improve the efficiency of understanding information.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131709280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel Accelerated Algorithm Optimization for Speckle Matching in Deformation Measurement Based on Stereo Vision
Yunhe Liu, Guiyang Zhang, Lili Wang, Jing Wang, Zijian Zhu
DOI: 10.1145/3483845.3483889
This paper is concerned with the efficiency of speckle matching in vision-based deformation measurement. The CUDA programming architecture, combined with the Visual Studio platform and Mex script files, is used to implement parallel operations. By compiling the CUDA source program into a GPU-parallel module with NVCC, a parallel computing scheme for speckle matching is given, which is crucial for improving the real-time performance of vision-based deformation measurement. Consequently, the method efficiently computes the matching of speckle image sub-regions in three-dimensional deformation measurement. The proposed strategy overcomes the obstacles that arise when Mex scripts interact with different programming languages and is not restricted by overloaded functions, so the overall computing performance of the deformation measurement program reaches a better state. Finally, the experimental results show that speckle matching achieves a speedup of 20.39 times.
{"title":"Parallel Accelerated Algorithm Optimization for Speckle Matching in Deformation Measurement Based on Stereo Vision","authors":"Yunhe Liu, Guiyang Zhang, Lili Wang, Jing Wang, Zijian Zhu","doi":"10.1145/3483845.3483889","DOIUrl":"https://doi.org/10.1145/3483845.3483889","url":null,"abstract":"This paper is concerned with the efficiency of speckle match in vision deformation measurement, upon which the CUDA programming architecture, combined with the Visual Studio platform and Mex script files is utilized to implement parallel operations. With the aid of compiling the GPU parallel mode of the CUDA source program through NVCC, the scheme of speckle matching parallel computing are given, which is crucial to improve the real-time performance of vision-based deformation measurement. Consequently, the method in this paper completes the efficient calculation of match of the speckle image sub-regions in the three-dimensional deformation measurement. The proposed strategy solves the obstacle problem when the Mex script and different programming languages interact, and is not restricted by overloaded functions, so that the overall computing performance of the deformation measurement program reaches a better state. Lastly, the experimental results show that the speckle matching has achieved a calculation speedup ratio of 20.39 times.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130299710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploratory Analysis on Topic Modelling for Video Subtitles
Atmik Ajoy, Chethan U Mahindrakar, H. Mamatha
DOI: 10.1145/3483845.3483878
In this paper, we explore different models for performing topic modelling on subtitle files. Subtitle files are sourced from movies and represent the dialogue being spoken, so applying topic modelling to them means trying to recover the topics of the video from the subtitles alone. Our idea is to test whether topic modelling on subtitles is a feasible way to obtain the topics of a movie. While topic modelling has previously been applied in bioinformatics, patent indexing, and many other areas, it has not seen application in this sphere. We search extensively for datasets, preprocess the subtitle files, and apply Latent Dirichlet Allocation, Hierarchical Dirichlet Processes, and Latent Semantic Indexing, three of the most prominent topic modelling approaches in use today, to these documents. Our results indicate which model works best for subtitle files.
{"title":"Exploratory Analysis on Topic Modelling for Video Subtitles","authors":"Atmik Ajoy, Chethan U Mahindrakar, H. Mamatha","doi":"10.1145/3483845.3483878","DOIUrl":"https://doi.org/10.1145/3483845.3483878","url":null,"abstract":"In this paper, we explore different models available to perform topic modelling on subtitles files. Subtitle files are sourced from movies and represent the dialogue being spoken. Applying this to topic modelling would mean trying to obtain the topics regarding the video from only the subtitles. Our novel idea is to test whether it would be feasible to use topic modelling on subtitles to get topics of a movie. While topic modelling as an idea has been used previously in bio-informatics,patent indexing and much more, has not seen any application in this sphere. We extensively search for datasets, preprocess the subtitles files and try Latent Dirichlet Allocation, Hierarchical Dirichlet Processes and Latent Semantic Indexing methods of topic modelling on these documents. These are the top three prominent topic modelling models that are used today. Our results entail what model would work best for subtitle files","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122575724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved YOLOv5 network-based object detection for anti-intrusion of gantry crane
Hongchao Niu, Xiao-Bing Hu, Hang Li
DOI: 10.1145/3483845.3483871
In response to the current lack of research on intelligence and security for outdoor gantry cranes, a method based on an improved you-only-look-once (YOLO)v5 network is proposed for intelligent anti-intrusion detection. First, an overall detection scheme is presented. Then, the following improvements are made to the YOLOv5 network to achieve the highest possible detection accuracy while maintaining speed: multi-layer receptive fields and fine-grained modules are incorporated into the backbone network to strengthen feature representation; dilated convolution replaces the pooling operation in the SPP module to reduce the loss of network information; cross-layer connections further enrich the fusion of non-adjacent deep and shallow features; the K-means algorithm is used to cluster target sizes to improve the localization accuracy of the model; and finally, the non-maximum suppression algorithm is optimized with a weighting scheme to alleviate the inaccurate localization of the YOLO family's bounding boxes. By combining these improvements, the improved YOLOv5s model achieves a better balance between effectiveness (75.81% mAP) and efficiency (83 FPS) in anti-intrusion detection. At the same time, compared with the original YOLOv5s network on the VOC dataset, the mAP of the improved YOLOv5s increases by 7.05%.
{"title":"Improved YOLOv5 network-based object detection for anti-intrusion of gantry crane","authors":"Hongchao Niu, Xiao-Bing Hu, Hang Li","doi":"10.1145/3483845.3483871","DOIUrl":"https://doi.org/10.1145/3483845.3483871","url":null,"abstract":"In response to the current lack of intelligence and security research on outdoor gantry cranes, the method based on the improved you-only-look-once (YOLO)v5 network for intelligent anti-intrusion detection is proposed. First an overall detection scheme is proposed. Then the following improvement tricks are made to the YOLOv5 network to achieve the highest possible detection accuracy while ensuring speed: incorporate multi-layer receptive fields and fine-grained modules into the backbone network to improve the performance of features; use dilated convolution to replace the pooling operation in the SPP module to reduce the loss of network information; further enrich the fusion of non-adjacent deep and shallow features in the network by using cross-layer connections; then use the K-means algorithm to cluster the target size to improve the positioning accuracy of the model; Finally, the non-maximum suppression algorithm is optimized by the weighting algorithm to effectively alleviate the inaccurate positioning of the YOLO series of bounding boxes. By combining multiple tricks, the improved YOLOv5s model can achieve a better balance between effectiveness (75.81% mAP) and efficiency (83 FPS) in anti-intrusion detection. At the same time, compared with the original YOLOv5s network on the VOC data set, the mAP value of the improved YOLOv5s is increased by 7.05%.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121607307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Addressing Scheme for Flexible IP Address
Shi-Hai Liu, Wanming Luo, Xu Zhou, YiHao Jia, Zhe Chen, Sheng Jiang
DOI: 10.1145/3483845.3483865
With the popularization and adoption of IP in various emerging scenarios, challenges also arise from its ossified address structure. The reason is that the conventional IP address has a fixed length and lacks extensibility, while the demands placed on IP vary greatly across scenarios. Flexible IP (FlexIP), a variable-length IP address, makes the address structure flexible enough to adapt to various network cases and addresses the low transmission efficiency faced by current IP addresses. However, because of the variable length of FlexIP, conventional routing and addressing schemes are not suitable for it. In this paper, we propose a new Bloom filter addressing scheme for FlexIP addresses. We use controllable prefix extension to limit the prefix distribution of FlexIP and use one-hashing to reduce the computational overhead of the Bloom filter. Simulations show that the proposed addressing scheme is better suited to FlexIP than other schemes and offers better query efficiency.
{"title":"An Efficient Addressing Scheme for Flexible IP Address","authors":"Shi-Hai Liu, Wanming Luo, Xu Zhou, YiHao Jia, Zhe Chen, Sheng Jiang","doi":"10.1145/3483845.3483865","DOIUrl":"https://doi.org/10.1145/3483845.3483865","url":null,"abstract":"Along with the popularization and adoption of IP in various emerging scenarios, challenges also arise with the ossified address structures. The reason is that conventional IP address is designed with fixed length and lacking extensibility, while the demand for IP varies greatly in different scenarios. Flexible IP (FlexIP), as a variable-length IP address, proactively makes address structure flexible enough to adapt to various network cases and solves the problem of low transmission efficiency faced by current IP addresses. However, due to the variable length of FlexIP, the conventional routing addressing scheme is not suitable for it. In this paper, we propose a new Bloom filter addressing scheme suitable for FlexIP address. We use controllable prefix extension to limit the prefix distribution of FlexIP, and use one-hashing to improve the computational overhead of the Bloom filter. Simulations show that the addressing scheme we proposed is more suitable for FlexIP than other schemes, and has better query efficiency.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130502894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synchronized Multi-Helical Computed Tomography
Changsheng Zhang, Guogang Zhu, Jian Fu
DOI: 10.1145/3483845.3483883
Limited by the field of view (FOV), most existing X-ray industrial computed tomography (ICT) techniques require multiple scans and projection stitching when inspecting long objects, which significantly increases the scanning time. In addition, these techniques usually adopt a one-by-one scanning mode that further reduces scanning efficiency. Therefore, this paper proposes a synchronized multi-helical computed tomography technique. It allows multiple objects to be helically scanned simultaneously without signal crosstalk, further improving detection efficiency. In addition, a reconstruction method suited to synchronized multi-helical CT is reported. This method uses projection segmentation and helical projection calibration to convert multi-object helical projections into single-object projections, which can then be reconstructed by conventional algorithms such as filtered back projection (FBP). This work improves the efficiency of CT scanning and will promote the application of CT to large-scale inspection of long objects.
{"title":"Synchronized Multi-Helical Computed Tomography","authors":"Changsheng Zhang, Guogang Zhu, Jian Fu","doi":"10.1145/3483845.3483883","DOIUrl":"https://doi.org/10.1145/3483845.3483883","url":null,"abstract":"Limited by the field of view (FOV), most existed X-ray industrial computed tomography (ICT) techniques require multi scans for stitching projections when detecting long objects, which significantly increases the scanning time. In addition, these techniques usually adopt the one-by-one scanning mode that further reduces the scanning efficiency. Therefore, this paper proposes a synchronized multi-helical computed tomography. It allows multi objects to be helical scanned simultaneously without signal crosstalk, while it further improves the detecting efficiency. Besides, the reconstruction method suitable for the synchronized multi-helical CT is reported. This method utilizes projection segmentation and helical projection calibration to convert multi-object helical projections into single-object projections. The generated single-object projection can be then reconstructed by conventional algorithms, e.g. the filtered back projection (FBP). This work can improve the efficiency of CT scanning and will promote the applications of CT in large-scale long object detection.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133573043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of regularity on learning in GANs
Niladri Shekhar Dutt, S. Patel
DOI: 10.1145/3483845.3483874
Generative Adversarial Networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other (hence "adversarial"), in order to generate new, synthetic instances of data that can pass for real data. GANs have been highly successful on datasets such as MNIST, SVHN, and CelebA, but training a GAN on large-scale datasets such as ImageNet remains a challenging problem because those datasets are considered not very regular. In this paper, we perform empirical experiments using parameterized synthetic datasets to probe how the regularity of a dataset affects learning in GANs. We show empirically that regular datasets are easier for GANs to model because their training process is more stable.
{"title":"Effect of regularity on learning in GANs","authors":"Niladri Shekhar Dutt, S. Patel","doi":"10.1145/3483845.3483874","DOIUrl":"https://doi.org/10.1145/3483845.3483874","url":null,"abstract":"Generative Adversarial Networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the opposite (thus the “adversarial”) so as to come up with new, synthetic instances of data that can pass for real data. GANs have been highly successful on datasets like MNIST, SVHN, CelebA, etc but training a GAN on large scale datasets like ImageNet is a challenging problem because they are deemed as not very regular. In this paper, we perform empirical experiments using parameterized synthetic datasets to probe how regularity of a dataset affects learning in GANs. We emperically show that regular datasets are easier to model for GANs because of their stable training process.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133378821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning A Linear Classifier by Transforming Feature Vectors for Few-shot Image Classification
Wanrong Huang, Yaqing Hu, Shuofeng Hu, Jingde Liu
DOI: 10.1145/3483845.3483873
Deep neural networks have achieved remarkable results in large-scale data domains. However, they have not performed well on few-shot image classification tasks. Here we propose a new meta-learning approach composed of an embedding network and a linear classifier learner. During the training phase, our approach (called Transformation Network) learns to learn a classifier by transforming the feature vectors produced by the embedding module. Once trained, a Transformation Network is able to classify images of new classes with the learned classifier. The ability to learn a discriminatively trained classifier allows our architecture to adapt quickly to new examples from unseen classes. We further describe implementation details of the architecture, including the convolutional networks and the linear transformation operations. We demonstrate that our approach achieves improved performance on few-shot image classification tasks on two benchmarks and a self-made dataset.
{"title":"Learning A Linear Classifier by Transforming Feature Vectors for Few-shot Image Classification","authors":"Wanrong Huang, Yaqing Hu, Shuofeng Hu, Jingde Liu","doi":"10.1145/3483845.3483873","DOIUrl":"https://doi.org/10.1145/3483845.3483873","url":null,"abstract":"Deep neural networks have achieved remarkable results in large-scale data domain. However, they have not performed well on few-shot image classification tasks. Here we propose a new meta-learning approach composed of an embedding network and a linear classifier learner. During the training phase, our approach (called Transformation Network) learns to learn a classifier by transforming the feature vectors produced by the embedding module. Once trained, a Transformation Network is able to classify images of new classes by the learned classifier. The ability of learning a discriminatively trained classifier could make our architecture adapt fast to new examples from unseen classes. We further describe implementation details upon the architecture convolutional networks and linear transformation operations. We demonstrate that our approach achieves improved performance on few-shot image classification tasks on two benchmarks and a self-made dataset.","PeriodicalId":134636,"journal":{"name":"Proceedings of the 2021 2nd International Conference on Control, Robotics and Intelligent System","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132440671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}