J. Marín, M.G. Serna-Díaz, J. Mora, N. Hernández-Romero, Irving Barragán-Vite, Cinthia Montano-Lara
Traditionally, databases are introduced to store information as a repository of data; however, users are responsible for adding, removing, and modifying database records. To provide reactiveness to passive database systems, the concept of the active database was introduced. Active behavior can be expressed via Event-Condition-Action (ECA) rules. Nevertheless, ECA rules may chain together, producing loops in rule firing and, in consequence, inconsistent states in the database system. This situation is known as the No-Termination problem. In this paper, a recursive algorithm based on Petri Nets to detect the No-Termination problem is proposed. The algorithm relies on a Petri Net representation of ECA rules and composite events. Furthermore, an execution-time analysis of the algorithm is carried out for sets of ECA rules with several cycles.
Title: Static Analysis for the No Termination Problem in Active Databases by Using Petri Nets Modelling
Authors: J. Marín, M.G. Serna-Díaz, J. Mora, N. Hernández-Romero, Irving Barragán-Vite, Cinthia Montano-Lara
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503152
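The non-termination condition this paper targets can be illustrated with a minimal sketch. This is not the paper's recursive Petri-Net algorithm itself; it is a plain depth-first search over the rule triggering graph that such a Petri Net encodes (rule names and the graph encoding are hypothetical): an edge from rule r to rule r' means the action of r can raise an event that triggers r', and any cycle means rule firing may never terminate.

```python
def find_trigger_cycles(rules):
    """Return one triggering cycle among ECA rules, or None.

    `rules` maps each rule name to the rules whose triggering event
    its action can raise. A cycle in this graph is a potential
    No-Termination situation.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {r: WHITE for r in rules}
    stack = []                     # current DFS path of rule names

    def dfs(r):
        color[r] = GRAY
        stack.append(r)
        for nxt in rules.get(r, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY:          # back edge: nxt is on the path
                return stack[stack.index(nxt):] + [nxt]
            if c == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        stack.pop()
        color[r] = BLACK
        return None

    for r in rules:
        if color[r] == WHITE:
            found = dfs(r)
            if found:
                return found
    return None
```

For example, a rule set where r1 triggers r2, r2 triggers r3, and r3 triggers r1 is reported as the cycle r1 → r2 → r3 → r1.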
During the pandemic period, automatic recognition of masked persons is both urgent and important for immigration safety, footprint tracking of disease carriers, pandemic control, and similar needs. This study uses Mel-Frequency Cepstrum techniques to extract human features, and applies big data techniques, namely a supervised learning method with vector quantization and Gaussian mixture models (VQ-GMM), to find the factors affecting the recognition hit rate. The same algorithm was tested four times, both with and without masks. The results show that, after supervised training, the test results for masked subjects are better than those for unmasked subjects, which is evidence that the algorithm of this study is robust.
Title: Facial recognition with mask during pandemic period by big data technical of GMM
Authors: Su-Tzu Hsieh, Chin-Ta Chen
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503090
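For reference, the likelihood score that GMM-based recognizers of this kind typically maximize takes the standard form below (a generic formulation; the paper's exact VQ-GMM parameterization is not given):

```latex
\log p(X \mid \lambda)
  = \sum_{t=1}^{T} \log \sum_{k=1}^{K} w_k \,
    \mathcal{N}\!\left(x_t \mid \mu_k, \Sigma_k\right),
\qquad \sum_{k=1}^{K} w_k = 1,
```

where \(X = \{x_1, \dots, x_T\}\) are the extracted cepstral feature vectors and \(\lambda = \{w_k, \mu_k, \Sigma_k\}\) is a subject's mixture model; identification selects the enrolled model with the highest score.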
With the growing complexity of network structures and operating conditions, new challenges arise for power system stability. Conventional linear controllers based on the small-signal method are designed via linearization at an equilibrium point, which may not guarantee stability when the equilibrium point changes or a large disturbance occurs. To overcome this problem, robust controllers based on nonlinear control theory have been proposed to improve transient stability. In this paper, a novel approach based on the Takagi-Sugeno (T-S) fuzzy model is proposed to design a nonlinear controller for a power system. T-S fuzzy models are constructed as an exact representation of the power system. A controller for each sub-model is designed based on the concept of parallel distributed compensation (PDC). The controller design problems reduce to linear matrix inequality (LMI) problems, which can be solved efficiently in practice by convex programming techniques. Simulation results illustrate the effectiveness of the proposed method.
Title: Nonlinear Controller Design for Power System via TS Fuzzy Model
Authors: Weiwei Zhang, Feng Gao, Haoming Liu
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503059
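For context, the textbook T-S fuzzy model with a PDC control law, and the LMI condition such designs typically reduce to (standard forms; the paper's specific sub-models and membership functions are not reproduced here):

```latex
\dot{x}(t) = \sum_{i=1}^{r} h_i(z(t))\bigl(A_i x(t) + B_i u(t)\bigr),
\qquad
u(t) = -\sum_{i=1}^{r} h_i(z(t))\, K_i x(t).
```

With a common quadratic Lyapunov function \(V(x) = x^{\top} P x\), setting \(X = P^{-1} > 0\) and \(M_i = K_i X\) turns the closed-loop stability requirement into the LMIs

```latex
X A_i^{\top} + A_i X - B_i M_j - M_j^{\top} B_i^{\top} < 0,
\qquad i, j = 1, \dots, r,
```

which are convex in the decision variables \((X, M_i)\), so the gains \(K_i = M_i X^{-1}\) can be recovered from any feasible solution.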
Modern commercial aircraft have become increasingly software-controlled. Using physical media to distribute and control on-board loadable software is inefficient and costly. This paper studies the traditional software distribution and control process and proposes a VPN- and wireless-based digital solution framework that applies state-of-the-art technologies, including electronic signatures, data encryption, network security, artificial intelligence (AI), and digital twins. The solution can significantly enhance the ability of manufacturers and operators to manage on-board loadable software and reduce the time spent copying and distributing physical media, and it can also contribute to aircraft predictive maintenance.
Title: Commercial Aircraft On-Board Loadable Software Distribution and Control Digital Solution
Authors: Lei Zhang, J. Sun, Lingchen Li, Jinling Cheng
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503053
Gang Chen, Su-jun Wang, Y. Ping, Yi Jin, Changzhi Xu, Ying-zhao Shao, Zhao Han
External commercial broadcast illuminators are not designed for radar, and such illuminators of opportunity usually have Doppler-varying structures. These structures typically cause ambiguity sidelobes in the Doppler dimension. To solve this problem, a mismatched filtering method that suppresses the ambiguity Doppler sidelobes is proposed. In the new algorithm, the mismatched filtering factor is obtained by jointly minimizing the signal energy loss and the total energy of the ambiguity Doppler sidelobes. Experimental results show the effectiveness of the proposed algorithm.
Title: Mismatched filtering for Doppler ambiguity sidelobe suppression in passive bistatic radar
Authors: Gang Chen, Su-jun Wang, Y. Ping, Yi Jin, Changzhi Xu, Ying-zhao Shao, Zhao Han
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503115
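A common way to formalize this kind of trade-off between sidelobe energy and signal energy loss is the constrained least-squares mismatched filter below (a generic textbook form, not necessarily the paper's exact cost function):

```latex
\min_{h}\; h^{H} R\, h
\quad \text{s.t.} \quad h^{H} s = 1
\qquad \Longrightarrow \qquad
h_{\mathrm{opt}} = \frac{R^{-1} s}{\, s^{H} R^{-1} s \,},
```

where \(s\) is the reference signal and \(R\) collects the energy of the ambiguity Doppler sidelobes to be suppressed; the unit-gain constraint bounds the signal energy loss.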
Xianbin Hong, S. Guan, Prudence W. H. Wong, Nian Xue, K. Man, Dawei Liu, Zhen Li
Reading product reviews is one of the best ways to judge product quality in online shopping. Because of the huge number of reviews, customers and merchants need product analysis algorithms to help with quality analysis. Current research substitutes sentiment analysis for quality analysis, but this has a significant drawback. This paper shows that sentiment-based analysis algorithms are insufficient for online product quality analysis: they ignore the relationship between an aspect and its description and cannot detect noise (unrelated descriptions). This paper therefore proposes LPQA, a Lifelong Product Quality Analysis algorithm that learns the relationships between aspects. It can detect noise and improve opinion classification performance, raising the classification F1 score to 77.3% on the Amazon iPhone dataset and 69.99% on the SemEval Laptop dataset.
Title: Lifelong Machine Learning-Based Quality Analysis for Product Review
Authors: Xianbin Hong, S. Guan, Prudence W. H. Wong, Nian Xue, K. Man, Dawei Liu, Zhen Li
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503154
Magnetic resonance imaging (MRI) is widely used in clinical auxiliary diagnosis. When acquiring images with MRI machines, patients usually need to be exposed to harmful radiation; the dose can be reduced by lowering the resolution of the MRI images. This paper analyzes the super-resolution of low-resolution MRI images based on a deep learning algorithm to ensure the pixel quality required for medical diagnosis, and then reconstructs high-resolution MRI images as an alternative way to reduce the radiation dose. We study how to improve the resolution of low-dose MRI by a factor of 4 through deep-learning-based super-resolution without any other available information. We construct a dataset close to natural low/high-resolution image pairs through degradation kernel estimation and noise injection, and build a two-layer generative adversarial network based on the design ideas of ESRGAN, PatchGAN, and VGG-19. Tests show that our method outperforms EDSR, RCAN, and ESRGAN on no-reference image quality metrics.
Title: x4 Super-Resolution Analysis of Magnetic Resonance Imaging based on Generative Adversarial Network without Supervised Images
Authors: Yunhe Li, Huiyan Zhao, Bo Li, Yi Wang
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503064
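The low/high-resolution pair construction via kernel estimation and noise injection corresponds to the usual blind super-resolution degradation model (standard notation; the paper's estimated kernels and noise statistics are its own):

```latex
y = (x \ast k)\!\downarrow_{s} +\, n, \qquad s = 4,
```

where \(x\) is the high-resolution image, \(k\) the estimated degradation (blur) kernel, \(\downarrow_{s}\) denotes downsampling by factor \(s\), and \(n\) the injected noise.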
Federated learning is a distributed machine learning framework in which many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider) while keeping the training data decentralized. Non-independent and identically distributed (non-IID) data across clients is one of the challenges in federated learning applications, leading to a decline in model accuracy and modeling efficiency. We present a clustered federated learning algorithm based on data distribution and conduct an empirical evaluation. To protect the privacy of each client's data, we apply an encrypted distance computation algorithm when measuring dataset similarity. Experiments demonstrate that the approach is effective for improving the accuracy and efficiency of federated learning: the AUC of the clustered model is about 15% higher than that of the conventional model, while the time cost of clustered modeling is less than half that of conventional modeling.
Title: Clustered Federated Learning Based on Data Distribution
Authors: Lu Yu, Wenjing Nie, Lun Xin, M. Guo
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503102
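As a minimal illustration of the clustering step, the sketch below groups clients by the similarity of their label distributions. This is an assumed greedy scheme over plaintext histograms: the paper's actual distance computation runs under encryption and its exact clustering rule is not specified here.

```python
import math

def label_distribution(labels, num_classes):
    """Normalized label histogram of one client's local dataset."""
    counts = [0] * num_classes
    for y in labels:
        counts[y] += 1
    return [c / len(labels) for c in counts]

def js_divergence(p, q):
    """Jensen-Shannon divergence between two label distributions."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cluster_clients(distributions, threshold):
    """Greedily group clients whose distribution is within `threshold`
    of a cluster's first member; each resulting cluster would then
    train its own federated model."""
    clusters = []
    for i, d in enumerate(distributions):
        for cluster in clusters:
            if js_divergence(d, distributions[cluster[0]]) < threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

For example, two clients with balanced labels and one with a skewed distribution end up in two separate clusters, so the skewed client no longer degrades the shared model of the balanced ones.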
Recovering a clear image from a single hazy image has been widely investigated in recent research. Due to the lack of real hazy image datasets, most studies train their models on artificially synthesized data. Nonetheless, real-world foggy images are far different from synthesized ones; as a result, existing methods do not defog real foggy images well. In this paper, we introduce a new dehazing algorithm that adds cycle consistency constraints to a generative adversarial network (GAN). It learns the translation from foggy images to clean images without supervised learning; that is, the model does not need paired data for training. We assume that clear and foggy images come from different domains. Two generators act as domain translators, one from the foggy image domain to the clean image domain and the other from the clean image domain back to the foggy image domain, and two discriminators in the GAN assess each domain translator. The GAN loss, combined with the cycle consistency loss, is used to regularize the model. We carried out experiments to evaluate the proposed method; the results demonstrate its effectiveness in dehazing and confirm that there is indeed a difference between real-fog images and synthetic images.
Title: Image Dehazing Via Cycle Generative Adversarial Network
Authors: Changyou Shi, Jianping Lu, Qian Sun, Shiliang Cheng, Xin Feng, Wei Huang
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503135
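In the standard CycleGAN formulation that such constraints follow (generic form; the paper's loss weights are not given), with generators \(G\): foggy → clean and \(F\): clean → foggy, the cycle consistency loss and total objective are:

```latex
\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_{x}\bigl[\lVert F(G(x)) - x \rVert_{1}\bigr]
  + \mathbb{E}_{y}\bigl[\lVert G(F(y)) - y \rVert_{1}\bigr],
```

```latex
\mathcal{L}
  = \mathcal{L}_{\mathrm{GAN}}(G, D_{\mathrm{clean}})
  + \mathcal{L}_{\mathrm{GAN}}(F, D_{\mathrm{foggy}})
  + \lambda\, \mathcal{L}_{\mathrm{cyc}}(G, F),
```

where \(D_{\mathrm{clean}}\) and \(D_{\mathrm{foggy}}\) are the two discriminators and \(\lambda\) weights the cycle term.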
With the development of e-commerce, the types of commodities are becoming more diversified. Classifying commodities by aesthetic attributes such as style is an important supplement to traditional classification techniques. Aiming at the unclear definition of furniture image style features, the difficulty of extracting them, and the poor classification performance of general models, we design FISC, a furniture image style classification model based on the Gram transformation. FISC uses a convolutional neural network to extract high-level content features of the image, applies a Gram transformation to turn them into style features, and feeds the result to a classifier for recognition. Since few public image style datasets exist at present, we build a dataset of furniture images with style attribute tags to make the experiments objective and targeted. The model has been compared extensively in experiments; the final accuracy is 99.23% on the training set and 94% on the test set, which verifies the superior performance of FISC on the task of furniture image style classification.
Title: FISC: Furniture image style classification model based on Gram transformation
Authors: Xin Du
Published: Proceedings of the 3rd International Conference on Advanced Information Science and System, 2021-11-26. DOI: 10.1145/3503047.3503071
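The Gram transformation at the heart of FISC is, in the usual neural-style-transfer sense, a channel-correlation matrix computed from a feature map. The sketch below works on plain lists and is illustrative only (FISC's exact normalization is not given):

```python
def gram_matrix(features):
    """Gram matrix of a convolutional feature map.

    `features` is a list of C channels, each flattened to H*W
    activations. Entry (i, j) is the inner product of channels i
    and j, normalized by the number of spatial positions. This
    discards spatial layout and keeps channel correlations, which
    is why it serves as a style descriptor.
    """
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]
```

For example, `gram_matrix([[1, 2], [3, 4]])` yields `[[2.5, 5.5], [5.5, 12.5]]`; the resulting C x C matrix (flattened) is what a style classifier would consume.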