During a pandemic, the demands of immigration and border safety, contact tracing of disease carriers, epidemic control, and similar tasks make automatic recognition of mask-wearing individuals both urgent and important. This study uses the Mel-frequency cepstrum technique to extract human features, and applies the supervised-learning big-data method of vector quantization with Gaussian mixture models (VQ-GMM) to identify the factors that affect the recognition hit rate. The same algorithm was tested four times, both with and without masks. The results show that, after supervised training, recognition of masked subjects is better than that of unmasked subjects, which indicates that the proposed algorithm is robust.
{"title":"Facial recognition with mask during pandemic period by big data technical of GMM","authors":"Su-Tzu Hsieh, Chin-Ta Chen","doi":"10.1145/3503047.3503090","DOIUrl":"https://doi.org/10.1145/3503047.3503090","url":null,"abstract":"At this pandemic period, for the safety demand of emigration, footprint tracking of disease carrier, pandemic control…etc., it is urgent as well as important to do an automatic recognition of a person with mask. This study uses Mel-frequency Cep-strum technic to simulate and extract human features; uses big data technician of supervising learning method and VQGMM to find out the impact factors of human features that affecting human recognition hit rate. This study using same algorithm to do four time of testing with mask and without mask. The study result show, after supervising training, the testing result of the people with mask is better than without mask which gave evidence of the algorithms of this study is robust.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133011090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Marín, M.G. Serna-Díaz, J. Mora, N. Hernández-Romero, Irving Barragán-Vite, Cinthia Montano-Lara
Traditionally, databases are introduced to store information as a repository of data; however, users are responsible for adding, removing, and modifying database records. To provide reactiveness to passive database systems, the concept of the active database was introduced. Active behavior can be expressed via Event-Condition-Action (ECA) rules. Nevertheless, ECA rules may cascade, producing loops in rule firing and, in consequence, inconsistent states in the database system. This situation is known as the No-Termination problem. In this paper, a recursive algorithm based on Petri Nets to detect the No-Termination problem is proposed. The algorithm takes into account a Petri Net representation of ECA rules and composite events. Furthermore, an execution-time analysis of the algorithm is carried out for sets of ECA rules with several cycles.
{"title":"Static Analysis for the No Termination Problem in Active Databases by Using Petri Nets Modelling","authors":"J. Marín, M.G. Serna-Díaz, J. Mora, N. Hernández-Romero, Irving Barragán-Vite, Cinthia Montano-Lara","doi":"10.1145/3503047.3503152","DOIUrl":"https://doi.org/10.1145/3503047.3503152","url":null,"abstract":"∗Traditionally, databases are introduced to store information as a repository of data; however, users are responsible to add, remove, and modify database records. In order to provide reactiveness to passive database systems, the concept of active database was introduced. Active behavior can be denoted via Event-Condition-Action (ECA) rules. Nevertheless, ECA-rules may concatenate, producing loops in the rule’s firing and, in consequence, inconsistent states in the database system. This situation is known as the No-Termination problem. In this paper, a recursive algorithm based on Petri Nets to detect the No-Termination problem is proposed. The algorithm takes into account a Petri Net representation for ECA rules and composite events. Furthermore, an execution time analysis of the algorithm is carried out for sets of ECA rules with several cycles.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133068999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rapid prediction of concentrations in mixed gases is a challenging task in the field of gas sensing. To address the large prediction errors caused by the nonlinear response of sensor arrays to gases, a mixed-gas concentration prediction model based on a Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) is proposed, which handles time-series data well. Sensor data for carbon monoxide and ethylene are used as the model input, and RMSE and R2 are used as evaluation metrics. Experimental results show that the concentration-prediction accuracy (R2) reaches 0.99 within a short response time of 20 seconds. In addition, the RMSE for carbon monoxide and ethylene is 11.4 ppm and 1.6 ppm, respectively; relative to their maximum presented concentrations, the error ratios are 2.1% and 8%, respectively. Compared with conventional machine learning algorithms, including reservoir computing and support vector regression (SVR), this method offers advantages in prediction accuracy and detection time, effectively mitigates the cross-sensitivity of MOX sensors, and reduces the measurement delay.
{"title":"Research on prediction model of mixed gas concentration based on CNN-LSTM network","authors":"Mengya Li, Juan He, Rong Zhou, Li Ning, Yan Liang","doi":"10.1145/3503047.3503110","DOIUrl":"https://doi.org/10.1145/3503047.3503110","url":null,"abstract":"Rapid prediction of concentration in mixed gas is a challenging task in the field of gas sensing. In view of the large error of mixed gas concentration prediction due to the nonlinear response characteristics of sensor array to gas, a prediction model of mixed gas concentration based on Convolutional Neural Network and Long-Short Term Memory is proposed, which has good time series processing ability. The sensor data of carbon monoxide and ethylene are used as the input of this model, RMSE and R2 are used as evaluation indicators. Experimental results show that the accuracy R2 of mixture concentration prediction can reach 0.99 in a short response time of 20 seconds. In addition, RMSE of carbon monoxide and ethylene is 11.4 ppm and 1.6 ppm, respectively. Relative to their maximum presented concentrations, the error ratio is 2.1% and 8%, respectively. Compared with the conventional machine learning algorithms including reservoir-computing and support vector regression (SVR), this method has certain advantages in concentration prediction accuracy and detection time, effectively solves the cross-sensitivity characteristics of MOX sensors, and reduces the measurement delay.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131196055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xianbin Hong, S. Guan, Prudence W. H. Wong, Nian Xue, K. Man, Dawei Liu, Zhen Li
Reading product reviews is the main way for online shoppers to assess product quality. Because of the huge number of reviews, customers and merchants need product-analysis algorithms to help with quality analysis. Current research substitutes sentiment analysis for quality analysis, but this has a significant drawback. This paper shows that sentiment-based analysis algorithms are insufficient for online product quality analysis: they ignore the relationship between an aspect and its description and cannot detect noise (unrelated descriptions). This paper therefore proposes a Lifelong Product Quality Analysis algorithm, LPQA, that learns the relationships between aspects. It can detect noise and improve opinion-classification performance, raising the classification F1 score to 77.3% on the Amazon iPhone dataset and 69.99% on the SemEval laptop dataset.
{"title":"Lifelong Machine Learning-Based Quality Analysis for Product Review","authors":"Xianbin Hong, S. Guan, Prudence W. H. Wong, Nian Xue, K. Man, Dawei Liu, Zhen Li","doi":"10.1145/3503047.3503154","DOIUrl":"https://doi.org/10.1145/3503047.3503154","url":null,"abstract":"Reading product reviews is the best way to know the product quality in online shopping. Due to the huge review number, customers and merchants need product analysis algorithms to help with quality analysis. Current researches use sentiment analysis to replace quality analysis. However, it has a significant drawback. This paper proves that the sentiment-based analysis algorithms are insufficient for online product quality analysis. They ignore the relationship between aspect and its description and cannot detect noise (unrelated description). So this paper raises a Lifelong Product Quality Analysis algorithm LPQA to learn the relationship between aspects. It can detect the noise and improve the opinion classification performance. It improves the classification F1 score to 77.3% on the Amazon iPhone dataset and 69.99% on Semeval Laptop dataset.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116301024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Magnetic resonance imaging (MRI) is widely used in clinical medical auxiliary diagnosis. When images are acquired by MRI machines, patients usually need to be exposed to harmful radiation, and the radiation dose can be reduced by lowering the resolution of the MRI images. This paper analyzes the super-resolution of low-resolution MRI images based on a deep learning algorithm to ensure the pixel quality required for medical diagnosis, and then reconstructs high-resolution MRI images as an alternative way to reduce the radiation dose. The paper studies how to improve the resolution of low-dose MRI by a factor of 4 through deep-learning-based super-resolution analysis without any other available information. It constructs a data set close to natural low/high-resolution image pairs through degradation-kernel estimation and noise injection, and builds a two-layer generative adversarial network based on the design ideas of ESRGAN, PatchGAN, and VGG-19. Tests show that our method outperforms EDSR, RCAN, and ESRGAN on no-reference image quality metrics.
{"title":"x4 Super-Resolution Analysis of Magnetic Resonance Imaging based on Generative Adversarial Network without Supervised Images","authors":"Yunhe Li, Huiyan Zhao, Bo Li, Yi Wang","doi":"10.1145/3503047.3503064","DOIUrl":"https://doi.org/10.1145/3503047.3503064","url":null,"abstract":"Magnetic resonance imaging (MRI) is widely used in clinical medical auxiliary diagnosis. In acquiring images by MRI machines, patients usually need to be exposed to harmful radiation. The radiation dose can be reduced by reducing the resolution of MRI images. This paper analyzes the super-resolution of low-resolution MRI images based on a deep learning algorithm to ensure the pixel quality of the MRI image required for medical diagnosis. It then reconstructs high-resolution MRI images as an alternative method to reduce radiation dose. This paper studies how to improve the resolution of low-dose MRI by 4 times through super-resolution analysis based on deep learning technology without other available information. This paper constructs a data set close to the natural low-high resolution image pair through degenerate kernel estimation and noise injection and constructs a two-layer generated countermeasure network based on the design ideas of ESRGAN, PatchGAN, and VGG-19. The test shows that our method is better than EDSR, RCAN, and ESRGAN in comparing non-reference image quality evaluation indexes.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116323312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of e-commerce, the types of commodities are becoming more diversified. Classifying commodities by aesthetic attributes such as style is an important supplement to traditional classification techniques. To address the unclear definition of furniture image style features, the difficulty of extracting them, and the poor classification performance of general-purpose models, we design FISC, a furniture image style classification model based on the Gram transformation. FISC is built on convolutional neural networks: it extracts high-level content features from the image, applies a Gram transformation to obtain style features, and feeds them to a classifier for recognition. Because few public image-style data sets exist, we build a data set of furniture images with style attribute labels to keep the experiments objective and targeted. The model was compared in extensive experiments, and the final training-set and test-set accuracies are 99.23% and 94% respectively, which verifies the superior performance of FISC on the furniture image style classification task.
{"title":"FISC: Furniture image style classification model based on Gram transformation","authors":"Xin Du","doi":"10.1145/3503047.3503071","DOIUrl":"https://doi.org/10.1145/3503047.3503071","url":null,"abstract":"With the development of e-commerce, the types of commodities are becoming more diversified. Classification of commodities based on aesthetic attributes such as style is an important supplement to traditional classification techniques. Aiming at the problems of an unclear definition of furniture image style features, difficulty in extraction, and poor classification effect of general models, we design a furniture image classification model FISC based on Gram transformation. The FISC model is based on convolutional neural network technology, which extracts high-level content features of the image and performs Gram transformation as style features and inputs to the classifier for classification and recognition. At present, there are few public image style data sets. In this study, we build a data set of furniture image style attribute tags for the objectivity and pertinence of the experiment. The model has been fully experimentally compared, and the accuracy of the final training set and test set are 99.23% and 94% respectively, which fully verifies the superior performance of the FISC model on the task of furniture image style classification.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127170966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In view of the bandwidth consumed by data-stream transmission in video analysis systems and the demand for accurate, online, real-time analysis of massive data, this paper proposes a deep learning model framework for face recognition deployed on embedded systems. Through data collaboration, the cloud can build a more complex data set from a small amount of data uploaded by the end devices, and the framework collaboration ensures that the fully trained cloud model is either downloaded directly to the end devices or transfers knowledge to them by distillation. Experiments show that the deep model not only achieves the real-time and accurate responses of the cloud system but also greatly reduces the bandwidth consumed by sample-data transmission during model training.
{"title":"Distributed Deep Learning System for Efficient Face Recognition in Surveillance System","authors":"Jinjin Liu, Zhifeng Chen, Xiaonan Li, Tongxin Wei","doi":"10.1145/3503047.3503130","DOIUrl":"https://doi.org/10.1145/3503047.3503130","url":null,"abstract":"In view of the bandwidth consumption caused by data stream transmission in video analysis system and the demand for accurate online real-time analysis of massive data, this paper proposes a deep learning model framework for face recognition employed in the embedded system. Through data collaboration, the cloud could build a more complex data set with a small amount of uploaded data gathered by the end devices. And the framework collaboration makes sure that the fully-trained cloud model directly download or distillate knowledge to the end devices. Experiments show that the deep model not only realizes the real-time response and the accurate response of the cloud system, but also greatly reduces the bandwidth consumption caused by sample data transmission in the model training process.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126662021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An anonymous communication system is an overlay network that hides the address of the destination server through multiple relay-routed communications. Because the communicating entities are difficult to track and locate, a large number of harmful activities, such as leaks of personal information, drug dealing, and terrorism, take place over such systems. Traffic-recognition technology can locate illegal activities in anonymous user communications and help law enforcement agencies investigate criminal activities on the darknet. Existing research mainly focuses on traditional traffic classification, encrypted traffic analysis, and Tor traffic identification, but comprehensive research and investigation on darknet traffic identification are lacking. This paper summarizes darknet traffic classification methods based on deep learning and machine learning, reviews common public data sets, and discusses open problems and challenges in this field.
{"title":"A Survey on Anonymous Communication Systems Traffic Identification and Classification","authors":"Ruonan Wang, Yuefeng Zhao","doi":"10.1145/3503047.3503087","DOIUrl":"https://doi.org/10.1145/3503047.3503087","url":null,"abstract":"∗An anonymous communication system is an overlay network that hides the address of the destination server through multiple relay routing communications. As communication entities are difficult to track and locate, a large number of harmful social security activities such as leakage of personal information, drug dealings, and terrorist activities have occurred. Traffic recognition technology can locate illegal activities from anonymous user communications and help law enforcement agencies investigate criminal activities on the darknet. Currently, the existing research mainly focuses on traditional traffic classification, encrypted traffic analysis, and tor traffic identification, but there is a lack of comprehensive research and investigation on darknet traffic identification. This paper summarizes darknet traffic classification methods based on deep learning and machine learning, reviews common public data sets, and discusses open problems and challenges in this field.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129062578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated learning is a distributed machine learning framework in which many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. a service provider) while keeping the training data decentralized. Non-independent and identically distributed (non-IID) data across clients is one of the challenges in federated learning applications and leads to a decline in model accuracy and modeling efficiency. We present a clustered federated learning algorithm based on data distribution and conduct an empirical evaluation. To protect the privacy of each client's data, we apply an encrypted distance-computation algorithm when measuring data-set similarity. The experiments demonstrate that the approach is effective for improving the accuracy and efficiency of federated learning: the AUC of the clustered model is about 15% higher than that of the conventional model, while the time cost of clustered modeling is less than half that of conventional modeling.
{"title":"Clustered Federated Learning Based on Data Distribution","authors":"Lu Yu, Wenjing Nie, Lun Xin, M. Guo","doi":"10.1145/3503047.3503102","DOIUrl":"https://doi.org/10.1145/3503047.3503102","url":null,"abstract":"Federated learning is a distributed machine learning framework where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. Non-independent and identically distributed data across clients is one of the challenges in federated learning applications which leads to a decline in model accuracy and modeling efficiency. We present a clustered federated learning algorithm based on data distribution and conduct an empirical evaluation. To protect the privacy of data in each client, we apply the encrypted distance computing algorithm in data set similarity measurement. The data experiments demonstrate the approach is effective for improving the accuracy and efficiency of federated learning. The AUC values of the clustered model is about 15% higher than the conventional model while the time cost of clustered modeling is less than 1/2 of that of conventional modeling.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123470899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modern commercial aircraft have become more and more software-controlled, and using physical media to distribute and control on-board loadable software is inefficient and costly. This paper studies the traditional software distribution and control process and proposes a VPN- and wireless-based digital solution framework that applies state-of-the-art technologies, including electronic signatures, data encryption, network security, artificial intelligence (AI), and digital twins. The solution can significantly enhance the ability of manufacturers and operators to manage on-board loadable software and reduce the time spent copying and distributing physical media, which also contributes to aircraft predictive maintenance.
{"title":"Commercial Aircraft On-Board Loadable Software Distribution and Control Digital Solution","authors":"Lei Zhang, J. Sun, Lingchen Li, Jinling Cheng","doi":"10.1145/3503047.3503053","DOIUrl":"https://doi.org/10.1145/3503047.3503053","url":null,"abstract":"Modern commercial aircraft have become more and more software-controlled. The use of physical media to distribute and control on-board loadable software is inefficient and costly. The paper studied the traditional software distribution and control process, and proposed a VPN and wireless-based digital solution framework by applying the State of the Art, including electronic signatures, data encryption, network security, artificial Intelligence(AI), and digital twin technology. The solutions can significantly enhance the ability of manufacturers and operators to manage the on-board loadable software, reduce the time spent in copying and distributing the physical media, which can also contribute to aircraft predictive maintenance.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114956549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}