Prediction of Yarn Quality Based on Actual Production
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404005
Bao-Wei Zhang, Lin Xu, Yong-Hua Wang
In recent decades, neural network approaches to predicting yarn quality indicators have been recognized for their high accuracy. Despite this accuracy advantage, the relationship a neural network learns between each input parameter and the yarn quality indicators can be incorrect; for example, when the raw cotton strength increases, the predicted yarn strength may remain the same or even decrease. Although such behavior is normal for prediction algorithms, actual production requires that a change in an individual parameter produce the correct trend in the prediction, i.e., an increase in raw cotton strength should correspond to an increase in yarn strength. To address this problem, this study proposes a yarn quality prediction method grounded in actual production that combines the nearest neighbor algorithm, particle swarm optimization, and expert experience. Expert experience determines the upper and lower limits of the parameter weights, particle swarm optimization finds the optimal weights within those limits, and the nearest neighbor algorithm then computes the predicted values of the yarn indexes. Finally, experiments verify both the problems with current methods and the soundness of the proposed method.
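A minimal sketch of the pipeline the abstract describes: expert experience fixes a box [lb, ub] for the feature weights, PSO searches inside that box, and a weighted nearest-neighbor rule produces the prediction. The leave-one-out fitness, the PSO constants, and k are assumptions here, not the authors' settings.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, w, k=3):
    """Predict a yarn index as the mean of the k nearest neighbours
    under a feature-weighted Euclidean distance."""
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    idx = np.argsort(d)[:k]
    return y_train[idx].mean()

def loo_error(X, y, w, k=3):
    """Leave-one-out squared error, used as the PSO fitness."""
    err = 0.0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        err += (weighted_knn_predict(X[mask], y[mask], X[i], w, k) - y[i]) ** 2
    return err / len(X)

def pso_weights(X, y, lb, ub, n_particles=20, iters=50):
    """Standard PSO restricted to the expert-given box [lb, ub]."""
    rng = np.random.default_rng(0)
    dim = X.shape[1]
    pos = rng.uniform(lb, ub, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([loo_error(X, y, p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lb, ub)   # expert bounds enforced here
        f = np.array([loo_error(X, y, p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Toy usage on synthetic data:
X = np.random.default_rng(1).random((40, 5))
y = X @ np.array([0.5, 0.2, 0.1, 0.1, 0.1])
w = pso_weights(X, y, lb=np.full(5, 0.1), ub=np.full(5, 2.0))
```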
{"title":"Prediction of Yarn Quality Based on Actual Production","authors":"Bao-Wei Zhang Bao-Wei Zhang, Lin Xu Bao-Wei Zhang, Yong-Hua Wang Lin Xu","doi":"10.53106/160792642023072404005","DOIUrl":"https://doi.org/10.53106/160792642023072404005","url":null,"abstract":"\u0000 In recent decades, the neural network approach to predicting yarn quality indicators has been recognized for its high accuracy. Although using neural networks to predict yarn quality indicators has a high accuracy advantage, its relationship understanding between each input parameter and yarn quality indicators may need to be corrected, i.e., increasing the raw cotton strength, the final yarn strength remains the same or decreases. Although this is normal for prediction algorithms, actual production need is more of a trend for individual parameter changes to predict a correct yarn, i.e., raw cotton strength increase should correspond to yarn strength increase. This study proposes a yarn quality prediction method based on actual production by combining nearest neighbor, particle swarm optimization, and expert experience to address the problem. We Use expert experience to determine the upper and lower limits of parameter weights, the particle swarm optimization finds the optimal weights, and then the nearest neighbor algorithm is used to calculate the predicted values of yarn indexes. Finally, the current problems and the rationality of the method proposed in this paper are verified by experiments.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134261884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Five Phases Algorithm: A Novel Meta-heuristic Algorithm and Its Application on Economic Load Dispatch Problem
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404002
Xiaopeng Wang, Shu-Chuan Chu, Václav Snášel, Hisham A. Shehadeh, Jeng-Shyang Pan
A new meta-heuristic algorithm named the five phases algorithm (FPA) is presented in this paper. The proposed method is inspired by the five phases theory in traditional Chinese thought. FPA updates agents based on a generating-and-overcoming strategy as well as a learning strategy that draws on the agent with the same label. FPA has a simple structure but excellent performance. It also has no predefined control parameters; only two general parameters, the population size and the termination condition, are required, which gives users flexibility in solving different optimization problems. For global optimization, 10 test functions from the CEC2019 test suite are used to evaluate the performance of FPA. The experimental results confirm that FPA outperforms six state-of-the-art algorithms: particle swarm optimization (PSO), the grey wolf optimizer (GWO), the multi-verse optimizer (MVO), differential evolution (DE), the backtracking search algorithm (BSA), and the slime mould algorithm (SMA). Furthermore, FPA is applied to the Economic Load Dispatch (ELD) problem from real power systems. The experiments show that the minimum operating cost of the power system obtained by the proposed FPA is more competitive than that of 14 counterparts. The source code of the algorithm is available at https://ww2.mathworks.cn/matlabcentral/fileexchange/118215-five-phases-algorithm-fpa.
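The abstract names three update ingredients: a generating strategy, an overcoming strategy, and learning from the agent with the same label. The sketch below is only a speculative reading of how those ingredients might be wired together; the phase cycle in GENERATES/OVERCOMES, the random coefficients, and the greedy replacement are all assumptions, and the authoritative implementation is the MATLAB code linked above.

```python
import numpy as np

GENERATES = {0: 1, 1: 2, 2: 3, 3: 4, 4: 0}   # assumed generating cycle over the five phases
OVERCOMES = {0: 2, 1: 3, 2: 4, 3: 0, 4: 1}   # assumed overcoming cycle

def fpa_sketch(f, lb, ub, n=25, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n, dim))
    labels = np.arange(n) % 5                  # each agent carries one of five labels
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(n):
            same = np.where(labels == labels[i])[0]
            gen = np.where(labels == GENERATES[labels[i]])[0]
            ovc = np.where(labels == OVERCOMES[labels[i]])[0]
            peer = X[same[fit[same].argmin()]]            # best agent with the same label
            r1, r2, r3 = rng.random(3)
            cand = (X[i]
                    + r1 * (peer - X[i])                        # learn from same label
                    + r2 * (X[gen[fit[gen].argmin()]] - X[i])   # pulled by generating phase
                    - r3 * (X[ovc[fit[ovc].argmin()]] - X[i]))  # pushed by overcoming phase
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:                    # greedy replacement
                X[i], fit[i] = cand, fc
    return X[fit.argmin()], fit.min()

# Toy usage on a sphere function:
best, val = fpa_sketch(lambda x: (x ** 2).sum(), np.full(10, -5.0), np.full(10, 5.0))
```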
{"title":"Five Phases Algorithm: A Novel Meta-heuristic Algorithm and Its Application on Economic Load Dispatch Problem","authors":"Xiaopeng Wang Xiaopeng Wang, Shu-Chuan Chu Xiaopeng Wang, Václav Snášel Shu-Chuan Chu, Hisham A. Shehadeh Václav Snášel, Jeng-Shyang Pan Hisham A. Shehadeh","doi":"10.53106/160792642023072404002","DOIUrl":"https://doi.org/10.53106/160792642023072404002","url":null,"abstract":"\u0000 A new meta-heuristic algorithm named the five phases algorithm (FPA) is presented in this paper. The proposed method is inspired by the five phases theory in traditional Chinese thought. FPA updates agents based on the generating and overcoming strategy as well as learning strategy from the agent with the same label. FPA has a simple structure but excellent performance. It also does not have any predefined control parameters, only two general parameters including population size and terminal condition are required. This provides flexibility to users to solve different optimization problems. For global optimization, 10 test functions from the CEC2019 test suite are used to evaluate the performance of FPA. The experimental results confirm that FPA is better than the 6 state-of-the-art algorithms including particle swarm optimization (PSO), grey wolf optimizer (GWO), multi-verse optimizer (MVO), differential evolution (DE), backtracking search algorithm (BSA), and slime mould algorithm (SMA). Furthermore, FPA is applied to solve the Economic Load Dispatch (ELD) from the real power system problem. The experiments give that the minimum cost of power system operation obtained by the proposed FPA is more competitive than the 14 counterparts. The source codes of this algorithm can be found in https://ww2.mathworks.cn/matlabcentral/fileexchange/118215-five-phases-algorithm-fpa.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123939073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovery of New Words in Tax-related Fields Based on Word Vector Representation
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404010
Wei Wei, Wei Liu, Beibei Zhang, Rafał Scherer, Robertas Damaševičius
New word detection, as basic research in natural language processing, has attracted extensive attention from the academic and business communities. When existing Chinese word segmentation technology is applied to the specific field of tax-related finance, it cannot correctly identify new words in that field, which degrades subsequent information extraction and entity recognition. Aiming at the current problems in new word discovery, this paper proposes a new word detection method that uses statistical features based on internal cohesion and branch entropy, combined with word vector representations. First, the corpus is preprocessed with word segmentation, the internal cohesion of candidate words is computed from the mutual information of their component strings, and candidate two-tuples are filtered out and then expanded. Next, the boundaries of new words are fixed by computing the branch entropy. Finally, the new word dictionary is expanded according to the cosine similarity of word vector representations. The unsupervised neologism discovery proposed in this paper allows the neologism lexicon to grow automatically, and experimental results on a large-scale corpus verify the effectiveness of the method.
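The two statistics driving candidate filtering and boundary detection can be written compactly: internal cohesion as the pointwise mutual information of a candidate's component strings, and branch entropy as the entropy of the characters adjacent to the candidate. A minimal character-level sketch; the paper's thresholds and the word-vector expansion step are not reproduced.

```python
import math
from collections import Counter

def cohesion(bigram, unigrams, bigrams, total_uni, total_bi):
    """Pointwise mutual information of a two-character string: high
    values mean the characters co-occur far more than chance."""
    a, b = bigram
    p_ab = bigrams[bigram] / total_bi
    p_a, p_b = unigrams[a] / total_uni, unigrams[b] / total_uni
    return math.log2(p_ab / (p_a * p_b))

def branch_entropy(word, corpus):
    """Entropy of the characters adjacent to `word`; a true word
    boundary shows high entropy on both sides, so take the minimum."""
    left, right = Counter(), Counter()
    start = corpus.find(word)
    while start != -1:
        if start > 0:
            left[corpus[start - 1]] += 1
        end = start + len(word)
        if end < len(corpus):
            right[corpus[end]] += 1
        start = corpus.find(word, start + 1)
    def H(c):
        n = sum(c.values())
        return -sum(v / n * math.log2(v / n) for v in c.values()) if n else 0.0
    return min(H(left), H(right))

# Toy usage on a tax-domain snippet:
corpus = "增值税发票增值税申报增值税抵扣"
uni = Counter(corpus)
bi = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
print(cohesion("增值", uni, bi, sum(uni.values()), sum(bi.values())))
print(branch_entropy("增值税", corpus))
```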
{"title":"Discovery of New Words in Tax-related Fields Based on Word Vector Representation","authors":"Wei Wei Wei Wei, Wei Liu Wei Wei, Beibei Zhang Wei Liu, Rafał Scherer Beibei Zhang, Robertas Damaševičius Rafal Scherer","doi":"10.53106/160792642023072404010","DOIUrl":"https://doi.org/10.53106/160792642023072404010","url":null,"abstract":"\u0000 New words detection, as basic research in natural language processing, has gained extensive concern from academic and business communities. When the existing Chinese word segmentation technology is applied in the specific field of tax-related finance, because it cannot correctly identify new words in the field, it will have an impact on subsequent information extraction and entity recognition. Aiming at the current problems in new word discovery, it proposed a new word detection method using statistical features that are based on the inner measurement and branch entropy and then combined with word vector representation. First, perform word segmentation preprocessing on the corpus, calculate the internal cohesion degree of words through statistics of scattered string mutual information, filter out candidate two-tuples, and then filter and expand the two-tuples; next, it locks the boundaries of new words through calculate the branch entropy. Finally, expand the new vocabulary dictionary according to the cosine similarity principle of word vector representation. The unsupervised neologism discovery proposed in this paper allows for automatic growth of the neologism lexicon, experimental results on large-scale corpus verify the effectiveness of this method.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131061881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability Analysis of Cold-standby Systems with Subsystems Using Conditional Binary Decision Diagrams
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404011
Siwei Zhou, Yinghuai Yu, Xiaohong Peng
Cold-standby systems are widely used under power-limited conditions to achieve fault tolerance and high reliability. The cold spare (CSP) gate is a common dynamic gate in the dynamic fault tree (DFT), and a DFT with CSP gates is typically used to model a cold-standby system for reliability analysis. In general, the inputs of a CSP gate are assumed to be basic events. However, to meet the requirements of current system designs, the inputs of a CSP gate may be either basic events or the top events of subtrees, which makes the sequence dependency among basic events in CSP gates much more complex. The early conditional binary decision diagram (CBDD) used for the reliability analysis of spare gates does not handle this well. To address the problem, the conditioning event rep is improved to describe the replacement behavior in CSP gates with subtree inputs, and the related formulae are derived. Further, a combinatorial method based on the CBDD is demonstrated for evaluating the reliability of cold-standby systems modeled by CSP gates with subtree inputs. A case study shows the advantage of the proposed method.
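For orientation, the sequence dependency a CSP gate encodes — the dormant spare cannot fail until the primary fails and is replaced — is illustrated below for two exponential components, where a closed form exists to check a Monte Carlo estimate against. This is standard CSP-gate semantics, not the paper's CBDD method, whose point is precisely the harder case of subtree inputs.

```python
import numpy as np

def csp_reliability_mc(lam_primary, lam_spare, t, n=200_000, seed=0):
    """Monte Carlo reliability of a two-unit CSP gate with exponential
    lifetimes: the cold spare cannot fail while dormant, so the system
    lifetime is the primary's lifetime plus the activated spare's."""
    rng = np.random.default_rng(seed)
    life = rng.exponential(1 / lam_primary, n) + rng.exponential(1 / lam_spare, n)
    return (life > t).mean()

def csp_reliability_exact(lp, ls, t):
    """Closed form for the same gate (requires lp != ls)."""
    return np.exp(-lp * t) + lp * (np.exp(-lp * t) - np.exp(-ls * t)) / (ls - lp)

# The two estimates should agree to roughly three decimal places.
print(csp_reliability_mc(0.01, 0.02, 50), csp_reliability_exact(0.01, 0.02, 50))
```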
{"title":"Reliability Analysis of Cold-standby Systems with Subsystems Using Conditional Binary Decision Diagrams","authors":"Siwei Zhou Siwei Zhou, Yinghuai Yu Siwei Zhou, Xiaohong Peng Yinghuai Yu","doi":"10.53106/160792642023072404011","DOIUrl":"https://doi.org/10.53106/160792642023072404011","url":null,"abstract":"\u0000 Cold-standby systems have been widely used for conditions with limited power, which achieve fault tolerance and high-reliability systems. The cold spare (CSP) gate is a common dynamic gate in the dynamic fault tree (DFT). DFT with CSP gates is typically used to model a cold-standby system for reliability analysis. In general, inputs of the CSP gate are considered to be basic events. However, with the requirement of the current system design, the inputs of the CSP gate may be either basic events or top events of subtrees. Hence, the sequence-dependency among basic events in CSP gates becomes much more complex. However, the early conditional binary decision diagram (CBDD) used for the reliability analysis of spare gates does not consider it well. To address this problem, the conditioning event rep is improved to describe the replacement behavior in CSP gates with subtrees inputs, and the related formulae are derived. Further, a combinatorial method based on the CBDD is demonstrated to evaluate the reliability of cold-standby systems modeled by CSP gates with subtrees inputs. The case study is presented to show the advantage of using our method.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129694423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Lightweight Privacy-preserving Path Selection Scheme in VANETs
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404004
Guojun Wang, Huijie Yang
With the rapid development of edge computing, artificial intelligence, and related technologies, intelligent transportation services in vehicular ad hoc networks (VANETs), such as in-vehicle navigation and distress alerts, are increasingly widely used in daily life. Road navigation is now an essential service in the vehicular network. However, when a user employs the road navigation service, his private data may be exposed to roadside nodes. Meanwhile, when the trusted authority sends the navigation route data to the user, the user can obtain all of the road data, and some of the unrequested data might be militarily sensitive. Therefore, how to achieve secure and efficient road navigation while protecting privacy is a crucial issue. In this paper, we propose a privacy-preserving path selection protocol that uses a token as the object of the oblivious transfers, which effectively reduces the communication overhead. In addition, a lightweight dual authentication and group key negotiation protocol is provided to support group members dynamically joining or leaving. Moreover, the scheme guarantees the forward security of the data. Experimental analysis shows that the proposed protocol achieves high security and efficiency.
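To see why using a token as the object of the oblivious transfer cuts overhead, the toy below runs a simplified Chou–Orlandi-style 1-out-of-n OT in which the transferred secret is only a short per-route decryption token, while the bulky routes travel as ordinary ciphertexts. The group, parameters, and message layout are illustrative assumptions and deliberately insecure at this size; this is not the paper's protocol and it omits the authentication and group key negotiation components.

```python
import hashlib, secrets

# Toy multiplicative group (far too small for real security).
P = 2**127 - 1          # a Mersenne prime
G = 3

def h(x: int) -> bytes:
    return hashlib.sha256(str(x).encode()).digest()

def xor(data: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(d ^ s for d, s in zip(data, stream))

# --- sender (trusted authority) holds n route descriptions ---
routes = [b"route-0 ...", b"route-1 ...", b"route-2 ..."]
a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)

# --- receiver (vehicle) wants route c without revealing c ---
c = 1
b = secrets.randbelow(P - 2) + 1
B = (pow(A, c, P) * pow(G, b, P)) % P
k_receiver = h(pow(A, b, P))                    # = H(g^{ab})

# --- sender derives one short token per route, encrypts each route ---
ciphertexts = []
for j in range(len(routes)):
    Aj_inv = pow(pow(A, j, P), P - 2, P)        # (A^j)^{-1} mod P
    k_j = h(pow(B * Aj_inv % P, a, P))          # token for route j
    ciphertexts.append(xor(routes[j], k_j))

# --- receiver can open only ciphertext c ---
print(xor(ciphertexts[c], k_receiver))          # b"route-1 ..."
```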
{"title":"A Lightweight Privacy-preserving Path Selection Scheme in VANETs","authors":"Guojun Wang Guojun Wang, Huijie Yang Guojun Wang","doi":"10.53106/160792642023072404004","DOIUrl":"https://doi.org/10.53106/160792642023072404004","url":null,"abstract":"\u0000 With the rapid development of edge computing, artificial intelligence and other technologies, intelligent transportation services in the vehicular ad hoc networks (VANETs) such as in-vehicle navigation and distress alert are increasingly being widely used in life. Currently, road navigation is an essential service in the vehicle network. However, when a user employs the road navigation service, his private data maybe exposed to roadside nodes. Meanwhile, when the trusted authorization sends the navigation route data to the user, the user can obtain all the road data. Especially, other unrequested data might be related to the military. Therefore, how to achieve secure and efficient road navigation while protecting privacy is a crucial issue. In this paper, we propose a privacy-preserving path selection protocol that supports a token as the object in the oblivious transfers, which effectively reduces the communication overhead. In addition, a lightweight dual authentication and group key negotiation protocol is provided to support dynamic joining or leaving of group members. Moreover, it can guarantee the security of forward data. After experimental analysis, the proposed protocol has high security and efficiency.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"98 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129258139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selective Layered Blockchain Framework for Privacy-preserving Data Management in Low-latency Mobile Networks
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404006
Sun-Woo Yun, Eun-Young Lee, Il-Gu Lee
With the gradual development of Fourth Industrial Revolution technologies such as artificial intelligence, the Internet of Things, and big data, and with the considerable amount of data in mobile networks, low-latency communication and security management are becoming crucial. Blockchain is a distributed data processing technology that tracks data records to support secure electronic money transactions and data security management in a peer-to-peer environment without the need for a central trusted authority. The data uploaded to the shared blockchain ledger are immutable, which makes it easy to track and preserve integrity. However, blockchain technology is difficult to apply in industry precisely because data cannot be corrected, even when inaccurate data are uploaded. Accordingly, research on blockchain mechanisms that support privacy-preserving data management is required to commercialize blockchain technology. Off-chain, blacklist, and hard-fork methods have been proposed previously, but they are challenging or impractical to apply. Therefore, to protect privacy, we propose a layered blockchain mechanism that can correct data by adding a buffer blockchain. We evaluated the latency, security, and space complexity of layered blockchains. The security and the security-to-latency ratio for data management of the selective layered blockchain are 2.2 and 11.3 times higher, respectively, than those of conventional blockchains. The proposed selective layered blockchain is expected to promote the commercialization of blockchain technologies in various industries by protecting user privacy.
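One plausible reading of the buffer-blockchain idea is a two-layer ledger: recent blocks sit in a buffer layer where corrections are still possible, and only sealed blocks enter the immutable main chain. The class below is such an interpretation; the buffer depth, the sealing rule, and the correction policy are assumptions rather than the paper's specification.

```python
import hashlib, json, time

class LayeredChain:
    """Two-layer ledger sketch: new records sit in a mutable buffer
    chain where corrections remain possible; sealed blocks move to
    the immutable main chain."""
    def __init__(self, buffer_depth=3):
        self.main, self.buffer = [], []
        self.buffer_depth = buffer_depth

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def _prev_hash(self, i):
        if i > 0:
            return self.buffer[i - 1]["hash"]
        return self.main[-1]["hash"] if self.main else "0" * 64

    def append(self, data):
        block = {"ts": time.time(), "data": data,
                 "prev": self._prev_hash(len(self.buffer))}
        block["hash"] = self._hash({k: block[k] for k in ("ts", "data", "prev")})
        self.buffer.append(block)
        while len(self.buffer) > self.buffer_depth:
            self.main.append(self.buffer.pop(0))   # sealed: now immutable

    def correct(self, index, data):
        """Allowed only while the block is still in the buffer layer."""
        self.buffer[index]["data"] = data
        for i in range(index, len(self.buffer)):   # re-link hashes downstream
            b = self.buffer[i]
            b["prev"] = self._prev_hash(i)
            b["hash"] = self._hash({k: b[k] for k in ("ts", "data", "prev")})
```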
{"title":"Selective Layered Blockchain Framework for Privacy-preserving Data Management in Low-latency Mobile Networks","authors":"Sun-Woo Yun Sun-Woo Yun, Eun-Young Lee Sun-Woo Yun, Il-Gu Lee Eun-Young Lee","doi":"10.53106/160792642023072404006","DOIUrl":"https://doi.org/10.53106/160792642023072404006","url":null,"abstract":"\u0000 With the gradual development of Fourth Industrial Revolution technologies, such as artificial intelligence, the Internet of Things, and big data, and the considerable amount of data in mobile networks, low-latency communication and security management are becoming crucial. Blockchain is a data-distributed processing technology that tracks data records to support secure electronic money transactions and data security management in a peer-to-peer environment without the need of a central trusted authority. The data uploaded to the blockchain-shared ledger are immutable, making tracking integrity preservation facile. However, blockchain technology is limited because it is challenging to utilize in the industry owing to its inability to correct data, even when inaccurate data are uploaded. Accordingly, research on blockchain mechanisms that consider privacy-preserving data management is required to commercialize blockchain technology. Previously, off-chain, blacklist, and hard-fork methods have been proposed; however, their application is challenging or impractical. Therefore, to protect privacy, we propose a layered blockchain mechanism that can correct data by adding a buffer blockchain. We evaluated the latency, security, and space complexity of layered blockchains. The security and security-to-latency ratio for data management of the selective layered blockchain is 2.2 and 11.3 times higher than the conventional blockchains, respectively. The proposed selective layered blockchain is expected to promote the commercialization of blockchain technologies in various industries by protecting user privacy.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133877097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resource Construction and Ensemble Learning based Sentiment Analysis for the Low-resource Language Uyghur
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404018
Azragul Yusup, Degang Chen, Yifei Ge, Hongliang Mao, Nujian Wang
To address the current scarcity of low-resource sentiment analysis corpora, this paper proposes HTL, a sentence-level sentiment analysis resource conversion method based on the syntactic and semantic knowledge of the low-resource language Uyghur, which converts a high-resource corpus into a low-resource one. In the conversion process, a k-fold cross-filtering method is proposed to reduce the distortion of data samples by selecting high-quality samples for conversion. The Uyghur sentiment analysis dataset USD is then constructed, and a baseline is established on it with an LSTM model, reaching an accuracy of 81.07% and an F1 value of 81.13%; this can serve as a reference for the construction of low-resource language corpora. This paper also proposes SA-LREL, a sentiment analysis model based on logistic regression ensemble learning. It combines the advantages of several lightweight networks, using TextCNN, RNN, and RCNN as base models, with a meta-model built from a logistic regression function for the ensemble. The accuracy and F1 value reach 82.17% and 81.86%, respectively, on the test set, and the experimental results show that the method effectively improves the performance of the Uyghur sentiment analysis task.
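The ensemble the abstract describes — base models whose out-of-fold predictions feed a logistic-regression meta-model — has the same shape as scikit-learn's StackingClassifier. The sketch below substitutes simple TF-IDF pipelines for the paper's TextCNN/RNN/RCNN base models, so it demonstrates only the stacking mechanics, not SA-LREL itself.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import StackingClassifier

# Lightweight stand-ins for the paper's TextCNN/RNN/RCNN base models.
base = [
    ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
    ("nb",  make_pipeline(TfidfVectorizer(), MultinomialNB())),
]

model = StackingClassifier(
    estimators=base,
    final_estimator=LogisticRegression(),  # logistic-regression meta-model
    cv=5,                                  # out-of-fold predictions, k-fold style
)

# Usage (hypothetical splits of the USD dataset):
# model.fit(train_texts, train_labels)
# preds = model.predict(test_texts)
```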
{"title":"Resource Construction and Ensemble Learning based Sentiment Analysis for the Low-resource Language Uyghur","authors":"Azragul Yusup Azragul Yusup, Degang Chen Azragul Yusup, Yifei Ge Degang Chen, Hongliang Mao Yifei Ge, Nujian Wang Hongliang Mao","doi":"10.53106/160792642023072404018","DOIUrl":"https://doi.org/10.53106/160792642023072404018","url":null,"abstract":"\u0000 To address the problem of scarce low-resource sentiment analysis corpus nowadays, this paper proposes a sentence-level sentiment analysis resource conversion method HTL based on the syntactic-semantic knowledge of the low-resource language Uyghur to convert high-resource corpus to low-resource corpus. In the conversion process, a k-fold cross-filtering method is proposed to reduce the distortion of data samples, which is used to select high-quality samples for conversion; finally, the Uyghur sentiment analysis dataset USD is constructed; the Baseline of this dataset is verified under the LSTM model, and the accuracy and F1 values reach 81.07% and 81.13%, respectively, which can provide a reference for the construction of low-resource language corpus nowadays. The accuracy and F1 values reached 81.07% and 81.13%, respectively, which can provide a reference for the construction of today’s low-resource corpus. Meanwhile, this paper also proposes a sentiment analysis model based on logistic regression ensemble learning, SA-LREL, which combines the advantages of several lightweight network models such as TextCNN, RNN, and RCNN as the base model, and the meta-model is constructed using logistic regression functions for ensemble, and the accuracy and F1 values reach 82.17% and 81.86% respectively in the test set, and the experimental results show that the method can effectively improve the performance of Uyghur sentiment analysis task.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128695911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Firefly with Dynamic Multi-swarm Particle Swarm Optimization for WSN Deployment
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404001
Wei-Yan Chang, Prathibha Soma, Huan Chen, Hsuan Chang, Chun-Wei Tsai
Enhancing the coverage of the sensing range with limited resources is a critical problem in wireless sensor networks (WSNs). Mobile sensors can patch coverage holes, but they have limited energy for moving long distances. Several recent studies have indicated that metaheuristic algorithms, especially PSO-based algorithms, can find an acceptable deployment solution in a reasonable time. However, most PSO-based algorithms converge too quickly, which leads to premature convergence and degrades deployment quality in a WSN. This paper presents a hybrid metaheuristic that combines dynamic multi-swarm particle swarm optimization with the firefly algorithm to find a deployment of static and mobile sensors with maximum coverage rate and minimum energy consumption. Moreover, a novel switch search mechanism between sub-swarms is presented so that the proposed algorithm avoids falling into local optima early in the convergence process. The simulation results show that the proposed method obtains better solutions, in terms of coverage rate and energy consumption, than the other PSO-based deployment algorithms compared in this paper.
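The two quantities being traded off, coverage rate and redeployment energy, can be evaluated for a candidate deployment as below. A minimal sketch assuming a square region sampled on a grid and a binary disk sensing model; the hybrid firefly/multi-swarm search itself is not reproduced.

```python
import numpy as np

def coverage_rate(sensors, r_sense, area=(100, 100), grid=1.0):
    """Fraction of grid points of the target area covered by at least
    one sensor -- the usual WSN coverage objective."""
    xs = np.arange(0, area[0] + grid, grid)
    ys = np.arange(0, area[1] + grid, grid)
    px, py = np.meshgrid(xs, ys)
    pts = np.stack([px.ravel(), py.ravel()], axis=1)
    d2 = ((pts[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
    return (d2.min(axis=1) <= r_sense ** 2).mean()

def move_cost(old_pos, new_pos):
    """Total travel distance of the mobile sensors, a proxy for the
    energy consumed by redeployment."""
    return np.linalg.norm(new_pos - old_pos, axis=1).sum()

# Toy usage: 30 randomly placed sensors with a 10 m sensing radius.
rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, (30, 2))
print(coverage_rate(sensors, r_sense=10.0))
```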
{"title":"A Hybrid Firefly with Dynamic Multi-swarm Particle Swarm Optimization for WSN Deployment","authors":"Wei-Yan Chang Wei-Yan Chang, Prathibha Soma Wei-Yan Chang, Huan Chen Prathibha Soma, Hsuan Chang Huan Chen, Chun-Wei Tsai Hsuan Chang","doi":"10.53106/160792642023072404001","DOIUrl":"https://doi.org/10.53106/160792642023072404001","url":null,"abstract":"\u0000 Enhancing the coverage area of the sensing range with the limiting resource is a critical problem in the wireless sensor network (WSN). Mobile sensors are patched coverage holes and they also have limited energy to move in large distances. Several recent studies indicated the metaheuristic algorithms can find an acceptable deployed solution in a reasonable time, especially the PSO-based algorithm. However, the speeds of convergence of most PSO-based algorithms are too fast which will lead to the premature problem to degrade the quality of deployed performance in WSN. A hybrid metaheuristic combined with dynamic multi-swarm particle swarm optimization and firefly algorithm will be presented in this paper to find an acceptable deployed solution with the maximum coverage rate and minimum energy consumption via static and mobile sensors. Moreover, a novel switch search mechanism between sub-swarms will also be presented for the proposed algorithm to avoid fall into local optimal in early convergence process. The simulation results show that the proposed method can obtain better solutions than other PSO-based deployment algorithms compared in this paper in terms of coverage rate and energy consumption.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"412 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122789269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404016
Ganyi Tang, Lili Fan, Jianguo Shi, Jingjing Tan, Guifu Lu
Recently, robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them rest on the premise that the average of the samples is zero and that the optimal mean is the center of the data. In fact, this premise only holds for PCA/2DPCA methods based on the L2-norm. For robust PCA/2DPCA methods with the L1-norm, the optimal mean deviates from zero, and estimating it is computationally expensive. Another shortcoming of PCA/2DPCA is that it does not pay enough attention to the intrinsic correlation within parts of the data. To tackle these issues, we introduce the maximum variance of sample differences into block principal component analysis (BPCA) and propose a robust method that extracts orthonormal features while avoiding the optimal mean. BPCA, which generalizes PCA and 2DPCA, is a general framework specialized in part-based learning and can make better use of partial correlations. However, projection features without sparsity not only have higher computational complexity but also lack semantic properties. We therefore integrate the elastic net into the avoiding-optimal-mean robust BPCA to impose sparsity constraints on the projection features. These two BPCA methods (non-sparse and sparse) make the assumption of zero-mean data unnecessary and avoid computing the optimal mean. Experiments on benchmark databases demonstrate the usefulness of the two proposed methods in image classification and image reconstruction.
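The mean-avoiding idea can be stated compactly: rewriting the spread of the data in terms of pairwise sample differences removes the mean from the objective entirely, so no L1-optimal mean has to be estimated. The following is a plausible form of that reformulation under an orthonormality constraint on the projection W; the paper's blockwise BPCA objective may differ in detail.

```latex
% Deviation-from-mean form (m must be estimated under the L1-norm)
% versus the mean-free pairwise-difference form assumed here:
\max_{W^{\top}W=I} \sum_{i=1}^{n} \bigl\| W^{\top}(x_i - m) \bigr\|_{1}
\qquad\leadsto\qquad
\max_{W^{\top}W=I} \sum_{i=1}^{n}\sum_{j=1}^{n} \bigl\| W^{\top}(x_i - x_j) \bigr\|_{1}
```

The right-hand objective never references m, which is why the difference formulation makes the zero-mean presumption unnecessary.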
{"title":"Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization","authors":"Ganyi Tang Ganyi Tang, Lili Fan Ganyi Tang, Jianguo Shi Lili Fan, Jingjing Tan Jianguo Shi, Guifu Lu Jingjing Tan","doi":"10.53106/160792642023072404016","DOIUrl":"https://doi.org/10.53106/160792642023072404016","url":null,"abstract":"\u0000 Recently, the robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them have a basic premise that the average of samples is zero and the optimal mean is the center of the data. Actually, this premise only applies to PCA/2DPCA methods based on L2-norm. The robust PCA/2DPCA method with L1-norm has an optimal mean deviate from zero, and the estimation of the optimal mean leads to an expensive calculation. Another shortcoming of PCA/2DPCA is that it does not pay enough attention to the instinct correlation within the part of data. To tackle these issues, we introduce the maximum variance of samples’ difference into Block principal component analysis (BPCA) and propose a robust method for avoiding the optimal mean to extract orthonormal features. BPCA, which is generalized from PCA and 2DPCA, is a general PCA/2DPCA framework specialized in part learning, can makes better use of the partial correlation. However, projection features without sparsity not only have higher computational complexity, but also lack semantic properties. We integrate the elastic network into avoiding optimal mean robust BPCA to perform sparse constraints on projection features. These two BPCA methods (non-sparse and sparse) make the presumption of zero-mean data unnecessary and avoid optimal mean calculation. Experiments on reference benchmark databases indicate the usefulness of the proposed two methods in image classification and image reconstruction.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129884625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Approach to Multiple Criteria Decision-Making Using the Dice Similarity Measure under Fermatean Fuzzy Environments
Pub Date: 2023-07-01, DOI: 10.53106/160792642023072404003
Yi-Ting Huang, Wan-Hui Lee, Jen-Hui Tsai
Many contemporary multiple criteria decision-making (MCDM) problems are complicated and uncertain to manage. MCDM problems can be complex because they involve decisions based on multiple conflicting criteria, and they can be uncertain because they often involve incomplete or subjective information, which makes it difficult to determine the optimal solution. Over the last decades, a vast number of MCDM methods have been proposed based on fuzzy sets (FSs) and intuitionistic fuzzy sets (IFSs). In this paper, we propose a new MCDM method based on Fermatean fuzzy sets (FFSs) and improved Dice similarity measures (DSM) and generalized Dice similarity measures (GDSM) between two FFSs, for the case where the weights of the criteria are completely unknown. Given a decision matrix in which the decision-maker provides no criteria weights, we calculate the weights using a normalized entropy measure. We then develop a new MCDM method using the proposed improved DSM and GDSM between two FFSs, which take the hesitancy degree of the elements of the FFSs into account. Finally, we use the values of the improved DSM and GDSM between two FFSs to obtain the preference order of the alternatives. The proposed method overcomes a drawback of some existing methods, which cannot obtain the preference order of the alternatives in Fermatean fuzzy (FF) environments.
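For concreteness, a basic Dice similarity between two FFSs, applied element-wise to (membership, non-membership) pairs and using cubes in line with the Fermatean constraint mu^3 + nu^3 <= 1, can be sketched as below. The paper's improved DSM additionally incorporates the hesitancy degree, whose exact weighting is not reproduced here, so treat this as an assumed baseline form.

```python
def hesitancy(mu, nu):
    """Fermatean hesitancy degree: pi = (1 - mu^3 - nu^3)^(1/3)."""
    return (1 - mu**3 - nu**3) ** (1 / 3)

def dice_ffs(A, B):
    """Dice similarity between two FFSs given as lists of
    (membership, non-membership) pairs, averaged over elements.
    The cubes reflect the Fermatean constraint mu^3 + nu^3 <= 1."""
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        num = 2 * (ma**3 * mb**3 + na**3 * nb**3)
        den = (ma**6 + na**6) + (mb**6 + nb**6)
        total += num / den if den else 1.0
    return total / len(A)

# Toy usage on two-element FFSs:
A = [(0.9, 0.3), (0.7, 0.6)]
B = [(0.8, 0.4), (0.6, 0.7)]
print(dice_ffs(A, B), hesitancy(0.9, 0.3))
```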
{"title":"A New Approach to Multiple Criteria Decision-Making Using the Dice Similarity Measure under Fermatean Fuzzy Environments","authors":"Yi-Ting Huang Yi-Ting Huang, Wan-Hui Lee Yi-Ting Huang, Jen-Hui Tsai Wan-Hui Lee","doi":"10.53106/160792642023072404003","DOIUrl":"https://doi.org/10.53106/160792642023072404003","url":null,"abstract":"\u0000 Many contemporary multiple criteria decision-making (MCDM) problems are rather complicated and uncertain to manage. MCDM problems can be complex because they involve making decisions based on multiple conflicting criteria, and they can be uncertain because they often involve incomplete or subjective information. This can make it difficult to determine the optimal solution to the problem. Over the last decades, tens of thousands MCDM methods have been proposed based on fuzzy sets (FSs) and intuitionistic fuzzy sets (IFSs). In this paper, we propose a new MCDM method based on Fermatean fuzzy sets (FFSs) and improved Dice similarity measure (DSM) and generalized Dice similarity measures (GDSM) between two FFSs with completely unknown weights of criteria. When a decision matrix is given, we calculate the weights of criteria using a normalized entropy measure while the weights of criteria are not given by the decision-maker. Then, we use the proposed improved DSM and GDSM between two FFSs that take the hesitancy degree of elements of FFSs into account and develop a new MCDM method. Finally, we use the values of the proposed improved DSM and GDSM between two FFSs to get the preference order of the alternatives. The proposed method can overcome the drawbacks and limitations of some existing methods that they cannot get the preference order of the alternatives under Fermatean fuzzy (FF) environments.\u0000 \u0000","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131017988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}