Pub Date: 2021-09-01, DOI: 10.53106/160792642021092205014
Jiaze Sun, Nan Han, Jianbin Huang, Jiahui Deng, Yang Geng
In many metropolises, especially during rush hours on holidays, thousands of riders initiate travel orders at the same time, and existing carpool matching models cannot handle such large-scale order volumes quickly enough. To address this problem, a fast and efficient multi-objective carpool matching algorithm (MOCMA) is put forward, which generates a set of different matching schemes suited to different practical scenarios. First, a partitioning idea is adopted to gather riders and drivers with similar journeys, and a relationship matrix construction algorithm (RMCA) is proposed. Then, from the perspectives of riders and drivers, maximizing service quality and maximizing shared mileage are taken as the two objectives, and a set of non-dominated solutions is generated using MOCMA. Finally, simulation results show that the proposed MOCMA is suitable for different practical scenarios, achieves a matching success rate as high as 99.7%, and has significant advantages over MOEA/D, SPEA2, and FastPGA.
{"title":"A Fast Response Multi-Objective Matching Algorithm for Ridesharing","authors":"Jiaze Sun, Nan Han, Jianbin Huang, Jiahui Deng, Yang Geng","doi":"10.53106/160792642021092205014","DOIUrl":"https://doi.org/10.53106/160792642021092205014","url":null,"abstract":"In many metropolitans, especially during rush hours on holidays, thousands of riders will initiate travel orders at the same time, and the existing carpool matching model cannot handle largescale travel orders quickly enough. For handling this problem, a fast and efficient multi-objective carpool matching algorithm (MOCMA) is put forward, which generates a set of different matching schemes suitable for different practical scenarios. First, the idea of partition is adopted to gather riders and drivers with similar journeys, and the relationship matrix construction algorithm (RMCA) is proposed; then from the perspective of riders and drivers, the maximum service quality and the maximum shared mileage are two objectives, and a set of non-dominated solution sets are generated using MOCMA; finally, the simulation experiment results show that MOCMA proposed is suitable for different practical scenarios, the matching success rate is as high as 99.7%, and it has significant advantages over MOEA/D, SPEA2, and FastPGA.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"1107-1116"},"PeriodicalIF":1.6,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44503860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01, DOI: 10.53106/160792642021092205018
Masayuki Fukumitsu, Shingo Hasegawa
Information security is a multidisciplinary area that addresses the development and implementation of security mechanisms to protect information systems against potential attacks or threats. A security goal can be defined for each type of attack; the currently relevant set of goals includes confidentiality, integrity, availability, privacy, authenticity and trustworthiness, non-repudiation, accountability, and auditability. However, with the rapid global penetration of networks, different models are being considered to design new solutions for realizing information security (e.g., in IoT or distributed scenarios). This has drawn considerable attention to information security research on modern information system architectures. Very recently, AI techniques have entered this area and act as a double-edged sword, enabling both attacks and defenses. The main purpose of this special issue is to publish selected high-quality papers from the 15th Asia Joint Conference on Information Security (AsiaJCIS 2020). The special issue focuses mainly on cryptography, network security, system security, and application security. We are interested in novel ideas, advanced techniques, comparative analyses of different methodologies, detailed surveys, and technical reviews on all aspects of cooperative communications and mechanisms in information security. The special issue covers both industrial applications and academic research contributions, and includes three papers that are extended versions of their conference papers.
{"title":"Linear and Lossy Identification Schemes Derive Tightly Secure Multisignatures","authors":"Masayuki Fukumitsu, Shingo Hasegawa","doi":"10.53106/160792642021092205018","DOIUrl":"https://doi.org/10.53106/160792642021092205018","url":null,"abstract":"Information security is a multidisciplinary area that addresses the development and implementation of security mechanisms in order to protect information systems with specific purposes against potential attacks or threats. The security goal can be defined for each type of attacks. The currently relevant set of security goals includes confidentiality, integrity, availability, privacy, authenticity and trustworthiness, non-repudiation, accountability and auditability. However, with the rapid global penetration of network, different models are considered to design new solutions to realizing information security (e.g., in IoT or distributed scenarios). This attracts lots of attention to work on information security research on modern architecture of information systems. Very recently, the AI techniques have joined this area and also acted as a double-edged sword in realizing attacks and defenses. The main purpose of this special issue is to publish selected papers with high-quality from “15th Asia Joint Conference on Information Security (AsiaJCIS 2020).” In this special issue, we focus mainly on cryptography, network security, system security, and application security. We are interested in the novel ideas, advanced techniques, comparative analysis of different methodologies, detailed surveys, and technical reviews on all aspects of cooperative communications and mechanisms in information security. This special issue also covers industrial applications and academic research contributions, and totally includes three papers that are the extended version from their conference papers.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"1157-1168"},"PeriodicalIF":1.6,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45889676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01, DOI: 10.53106/160792642021092205010
Lingxia Liao, Zhi Li, Han-Chieh Chao
Wide Area Networks (WANs) form the network core covering wide geographical areas. WANs often have complex topologies, and it is challenging to incorporate multiple controllers into the control plane to reduce network delay in Wide Area Software Defined Networks (WASDNs). We propose a distributed controller placement problem (DCPP) for various control plane structures to address this challenge. Because existing exhaustive and greedy algorithms cannot efficiently solve the DCPP over many large-scale WASDNs, we propose a network simplification strategy based on a novel global network coefficient, the polyindex, which identifies all non-overlapping cliques in a network and characterizes the topological features of such complex networks. With this strategy, a good number, organization, and placement of controllers for the DCPP over large-scale WASDNs can be determined. Extensive evaluations demonstrate the effectiveness of the polyindex in capturing the features of sparse WANs. Applying the proposed strategy to large-scale WANs with small and medium polyindexes quickly finds placements for the DCPP that meet a given delay requirement, while carefully adjusting the delay requirement and threshold is the key to generating high-quality frontiers at low time cost over WANs with large scale and large polyindexes.
{"title":"Placing Controllers over Complex Wide Area SDNs Based on Clique Identification","authors":"Lingxia Liao, Zhi Li, Han-Chieh Chao","doi":"10.53106/160792642021092205010","DOIUrl":"https://doi.org/10.53106/160792642021092205010","url":null,"abstract":"Wide Area Networks (WANs) form the network core that covers wide geographical areas. WANs often have complex topologies, and it is challenging to incorporate multiple controllers in the control plane to reduce the network delay in Wide Area Software Defined Networks (WASDNs). We propose a distributed controller placement problem (DCPP) for various control plane structures to address this challenge. While existing exhaustive and greedy algorithms cannot efficiently solve the DCPP over many large-scaled WASDNs, we propose a network simplification strategy based on a novel global network coefficient, polyindex, to identify all the nonoverlapped cliques in networks and characterize the topology features of such complex networks. With such strategy, the good number, organization, and placements of controllers for the DCPP over large-scaled WASDNs can be determined. Extensive evaluations demonstrate the effectiveness of the polyindex in capturing the features of sparse WANs. While applying the proposed strategy over large-scaled WANs with small and medium polyindexes can quickly find the placements for the DCPP while meeting the given delay requirement, carefully adjusting the delay requirement and threshold is the key to generate high quality frontiers while keeping the time cost low over the WANs with large scales and polyindexes.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"1053-1066"},"PeriodicalIF":1.6,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48151456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01, DOI: 10.3966/160792642021052203002
Chin-Feng Lai, Hung-Yen Weng, Hao Yu Chou, Yueh-Min Huang
In recent years, with the rise of Internet of Things technology, most devices can be connected to the internet, leaving fewer and fewer available IP addresses. As a result, the number of applications using NAT has increased. Coupled with the rise of fog computing architectures, NAT traversal has become increasingly difficult to implement. Although many methods of traversing firewalls have been proposed and widely used in various communication architectures, users cannot rely solely on a central server to establish connections in a peer-to-peer network, which increases the load on the NAT architecture and the transport server. Previous work has addressed this problem with the goal of increasing the success rate of establishing a connection through a NAT server. This study therefore proposes a novel load balancing solution in which the loading value obtained from an SVM model is adopted as the basis for selecting the network address translation server. We discuss the maximum server loading, different analysis models, and the delay added by the new processes in the architecture, and find that the proposed approach achieves load balancing with only a small increase in delay.
{"title":"A Novel NAT-based Approach for Resource Load Balancing in Fog Computing Architecture","authors":"Chin-Feng Lai, Hung-Yen Weng, Hao Yu Chou, Yueh-Min Huang","doi":"10.3966/160792642021052203002","DOIUrl":"https://doi.org/10.3966/160792642021052203002","url":null,"abstract":"In recent years, with the rise of the internet of things technology, most devices can be connected to the internet, resulting in fewer and fewer IP addresses. Therefore, the number of applications using NAT has also increased. Coupled with the rise of fog computing architecture, NAT traversal has become increasingly difficult to be implemented. Although many methods of traversing firewalls have been proposed and widely used in various communication archiectures, Users cannot rely solely on the central server to establish connections in the peer-to-peer network, which increases the loading on the NAT architecture and transport server. In the past, there have been related papers to solve this problem, and the goal is to increase the success rate of establishing a connection through a NAT server. Therefore, a novel load balancing solution is proposed in this study, and the loading value obtained from the SVM model is adopted as the basis for selecting the network address conversion server. At the end of the study, we not only discuss the maximum server loading, different analysis models, and the delay time added by the new processes in the architecture, but also find the proposed approach is able to achieve the loading balance with only a small increase in delay time.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"513-520"},"PeriodicalIF":1.6,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43803519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01, DOI: 10.3966/160792642021052203001
Fang Fan, S. Chu, Jeng-Shyang Pan, Qing-yong Yang, Huiqi Zhao
Research on optimization algorithms has long attracted strong interest, and a large number of new algorithms and methods have emerged. The sine cosine algorithm (SCA) is an excellent algorithm that has appeared in recent years; it is a population-based stochastic optimization algorithm. Compared with existing algorithms, SCA suits a range of optimization problems, especially the optimization of unimodal functions, and it can optimize real-world problems with unknown and limited search spaces. However, it sometimes performs unsatisfactorily on specific problems, such as the optimization of multimodal or composite functions. This paper presents a parallel version of the sine cosine algorithm (PSCA) with three communication strategies; different strategies can be selected according to the type of optimization function to achieve better results. Repeated tests on different types of functions show that the proposed PSCA solves these optimization problems more effectively. In a simulation of wireless sensor network (WSN) dynamic deployment optimization, the method yields an ideal sensor node distribution, suggesting that PSCA is promising for other practical problems as well.
{"title":"Parallel sine cosine algorithm for the dynamic deployment in wireless sensor networks","authors":"Fang Fan, S. Chu, Jeng-Shyang Pan, Qing-yong Yang, Huiqi Zhao","doi":"10.3966/160792642021052203001","DOIUrl":"https://doi.org/10.3966/160792642021052203001","url":null,"abstract":"All along, people have a high enthusiasm for the research of optimization algorithm. A large number of new algorithms and methods have emerged. The sine cosine algorithm (SCA) is an excellent algorithm that has appeared in recent years. It is a stochastic optimization algorithm based on population. Compared with the existing algorithms, SCA is a suitable solution to different optimization problems, especially the optimization of unimodal functions. It is qualified to optimize real-world problems with unknown and limited search space. But sometimes it does not perform satisfactorily when dealing with some specific problems, such as optimization of multimodal functions or composite functions. This paper presents a parallel version of the sine cosine algorithm (PSCA) with three communication strategies. Different strategies can be selected according to the type of optimization function to achieve better results. We have repeatedly tested different types of functions, and the results show that the proposed PSCA can solve the optimization problem more specifically. In the simulation of wireless sensor network (WSN) dynamic deployment optimization, it is found that using this method can get the ideal sensor node distribution, which makes PSCA’s performance in solving other practical problems worth looking forward to.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"499-512"},"PeriodicalIF":1.6,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47074662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-30, DOI: 10.3966/160792642021032202001
Yu Shan Lin, Chin-Feng Lai, Chieh-Lin Chuang, Xiaohu Ge, H. Chao
Training a reinforcement learning model usually requires a large amount of training data and computing time to learn regularities from environmental feedback and allow the model to converge. However, edge nodes usually do not have powerful computing capabilities, which makes it impractical to run reinforcement learning models on edge computing nodes. The framework proposed in this study therefore enables a reinforcement learning model to gradually converge to the parameters of a supervised learning model within a shorter computing time, addressing the problem of insufficient terminal device performance in edge computing. In the experiments, the operating differences of hardware with different performance levels and the influence of the network environment and neural network architecture are analyzed on the MNIST and Mall datasets. The results show that the collaborative training framework can meet users' real-time requirements, and that the latency pressure on the model comes from applications of different levels of complexity.
{"title":"Collaborative Framework of Accelerating Reinforcement Learning Training with Supervised Learning Based on Edge Computing","authors":"Yu Shan Lin, Chin-Feng Lai, Chieh-Lin Chuang, Xiaohu Ge, H. Chao","doi":"10.3966/160792642021032202001","DOIUrl":"https://doi.org/10.3966/160792642021032202001","url":null,"abstract":"In the reinforcement learning model training, it usually takes a lot of training data and computing time to find the law from the environmental response in order to facilitate the convergence of the model. However, edge nodes usually do not have powerful computing capabilities, which makes it impossible to apply reinforcement learning models to edge computing nodes. Therefore, the framework proposed in this study can enable the reinforcement learning model to gradually converge to the parameters of the supervised learning model within the shorter computing time, so as to solve the problem of insufficient terminal device performance in edge computing. Among the experimental results, the operating differences of hardware with different performance and the influence of the network environment and neural network architecture are analyzed based on the Mnist and Mall data sets. The result shows that it is sufficient to load the real-time required by users under the framework of collaborative training, and the time delay pressure on the model is caused by the application of different levels of complexity.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"229-238"},"PeriodicalIF":1.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43239320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-30, DOI: 10.3966/160792642021032202022
Chih-Hung Chang, Tse-Chuan Hsu, W. Chu, Che-Lun Hung, P. Chiu
Chronic disease management is the most expensive, fastest growing, and most difficult problem facing medical care workers in many countries. Current health care information systems lack interoperability and data model standards, which makes it very difficult to extract meaningful information for further analysis. Deep learning can help caregivers analyze various features of collected patient data, potentially enabling more accurate diagnosis and improved treatment through early detection and prevention. Our approach uses the P4 medical model (predictive, preventive, personalized, and participatory), which identifies diseases at an early stage of development and therefore helps patients improve their daily behavior and health status. In this paper, an effective and reliable intelligent service warehousing platform, consisting of a service framework and a middle layer, is designed to maintain the quality of service of an intelligent health care system and to predict the risk factors that contribute to diabetes and kidney disease. The mathematical prediction model is provided to doctors to support their patients' treatment. Finally, we verified the availability and effectiveness of the service platform using hospital data.
{"title":"A smart service warehousing platform supporting big data deep learning modeling analysis","authors":"Chih-Hung Chang, Tse-Chuan Hsu, W. Chu, Che-Lun Hung, P. Chiu","doi":"10.3966/160792642021032202022","DOIUrl":"https://doi.org/10.3966/160792642021032202022","url":null,"abstract":"Chronic disease management is the most expensive, fastest growing and most difficult problem for medical care workers in various countries. Current Health care information systems do not have interoperability characteristics and lack of data model standards, which makes it very difficult to extract meaningful information for further analysis. Deep learning can help medical care giver analyze various features of collecting data of patients and possibly more accurately diagnose and improve medical treatment through early detection and prevention. Our approach uses P4 medical model, which is predictive, preventative, personalized and participatory, which identifies diseases at early stage of diseases development, therefore it helps patients improve their daily behavior and health status. In this paper, an effective and reliable intelligent service warehousing platform, which is a service framework and a middle layer, is designed to maintain the quality of service of the intelligent health care system and to analyze and design to predict the risk factors that contribute to diabetes and kidney disease. The mathematical prediction model is provided to doctors to support their patient’s treatment. At the end we verified the availability and effectiveness of this service platform from the data of hospital.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"483-489"},"PeriodicalIF":1.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43827228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-30, DOI: 10.3966/160792642021032202003
Jeng-Shyang Pan, Jiawen Zhuang, Hao Luo, S. Chu
A multi-group Flower Pollination Algorithm (MFPA) based on novel communication strategies is proposed to address the disadvantages of the Flower Pollination Algorithm (FPA), such as slow convergence, poor search accuracy, and a tendency to become trapped in local optima. By introducing a parallel operation that divides the population into groups, the global search capability of the algorithm is improved. Three new communication strategies are then proposed: Strategy 1 combines the high-quality pollens of each group for evolution and replaces the old pollens; Strategy 2 moves each group's inferior pollens toward the optimal pollen; Strategy 3 combines Strategies 1 and 2. Experiments on 25 classical test functions show that MFPA with the novel communication strategies has good global optimization ability, improving the convergence speed and accuracy of FPA. We also compare MFPA under the three strategies with FPA and PSO, and the results show that MFPA outperforms both. Finally, we apply it to two practical problems and achieve a better convergence effect than FPA.
{"title":"Multi-group Flower Pollination Algorithm Based on Novel Communication Strategies","authors":"Jeng-Shyang Pan, Jiawen Zhuang, Hao Luo, S. Chu","doi":"10.3966/160792642021032202003","DOIUrl":"https://doi.org/10.3966/160792642021032202003","url":null,"abstract":"Multi-group Flower Pollination Algorithm (MFPA) based on novel communication strategies was proposed with an eye to the disadvantages of the Flower Pollination Algorithm (FPA), such as tardy convergence rate, inferior search accuracy, and strong local optimum. By introducing a parallel operation to divide the population into some groups, the global search capability of the algorithm was improved. Then three new communication strategies were proposed. Strategy 1 combined high-quality pollens of each group for evolution and replaced the old pollens. Strategy 2 let each group’s inferior pollens approaching to the optimal pollen. Strategy 3 was a combination of strategies 1 and 2. Then, experiments on 25 classical test functions show that MFPA based on novel communication strategies has a good global optimization ability, improving the convergence speed and accuracy of the FPA. Thus, we compare MFPA using three strategies with FPA and PSO, its result shows that MFPA is better than FPA and PSO. Finally, we also applied it to two practical problems and achieved a better convergence effect than FPA.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"257-269"},"PeriodicalIF":1.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48400504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-30, DOI: 10.3966/160792642021032202016
Wei Wei, Zengguo Sun, Zhihua Zhang, R. Scherer, R. Damaševičius
The Fisher distribution is a popular model for high-resolution (HR) synthetic aperture radar (SAR) images due to its high-peaked and heavy-tailed characteristics as well as its theoretical justification and mathematical tractability. Based on Fisher modeling of SAR images, a maximum a posteriori (MAP) filter is proposed. In the Fisher model, the number-of-looks parameter is fixed to correspond to the formation mechanism of multi-look intensity images, and the other two parameters are accurately estimated from the SAR image using second-kind statistics. To improve the Fisher MAP filter, especially with respect to speckle suppression, a Fisher MAP filter based on recognition of structural information is constructed using point target detection, an adaptive windowing method, homogeneous region detection, and selection of the most homogeneous sub-window. Despeckling experiments on HR SAR images demonstrate that the improved Fisher MAP filter based on structural information detection suppresses speckle in homogeneous and edge regions while effectively preserving fine details, edges, and point targets.
{"title":"Improved Fisher MAP Filter for Despeckling of High-Resolution SAR Images Based on Structural Information Detection","authors":"Wei Wei, Zengguo Sun, Zhihua Zhang, R. Scherer, R. Damaševičius","doi":"10.3966/160792642021032202016","DOIUrl":"https://doi.org/10.3966/160792642021032202016","url":null,"abstract":"Fisher distribution is a popular model for high-resolution (HR) synthetic aperture radar (SAR) images due to its high-peaked and heavy-tailed characteristics as well as its theoretical justification and mathematical tractability. Based on the Fisher modeling of SAR images, the maximum a posteriori (MAP) filter is suggested. In the Fisher model, the parameter of image looks is thought to be fixed to correspond to the formation mechanism of multi-look intensity images, and the other two parameters are accur ately assessed from the SAR image based on second-kind statistics. To improve the Fisher MAP filter especially in the aspect of speckle suppression, the Fisher MAP filter based on recognition of structural information is created using point target detection, the adaptive windowing method, homogeneous region detection, and selection of most homogeneous sub-window. The experiments on despeckling of HR SAR images demonstrate that the improved Fisher MAP filter based on structural information detection can suppress speckle in homogenous and edge regions, and effectively preserve fine details, edges, and point targets.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"413-421"},"PeriodicalIF":1.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42885775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-03-30, DOI: 10.3966/160792642021032202020
Jones Sai-Wang Wan, Shenglin Wang
Stream data processing has become an important issue in the last decade. Data streams are generated on the fly and may change their data distribution over time, so data stream processing requires mechanisms or methods to adapt to changes in data distribution, a phenomenon called concept drift. Concept drift detection can be challenging because data labels are not known. In this paper, we propose a drift detection method based on a statistical test, with clustering and feature extraction as preprocessing. The goal is to reduce detection time by using principal component analysis (PCA) as the feature extraction method. Experimental results on synthetic and real-world streaming data show that the clustering preprocessing improves drift detection performance, and that the feature extraction trades an insignificant loss in detection performance for a speedup in execution time.
{"title":"Concept Drift Detection Based on Pre-Clustering and Statistical Testing","authors":"Jones Sai-Wang Wan, Shenglin Wang","doi":"10.3966/160792642021032202020","DOIUrl":"https://doi.org/10.3966/160792642021032202020","url":null,"abstract":"Stream data processing has become an important issue in the last decade. Data streams are generated on the fly and possibly change their data distribution over time. Data stream processing requires some mechanisms or methods to adapt to the changes of data distribution, which is called the concept drift. Concept drift detection can be challenging due to the data labels are not known. In this paper, we propose a drift detection method based on the statistical test with clustering and feature extraction as preprocessing. The goal is to reduce the detection time with principal component analysis (PCA) for the feature extraction method. Experimental results on synthetic and real-world streaming data show that the clustering preprocessing improve the performance of the drift detection and feature extraction trade-off an insignificant performance of detection for speedup for the execution time.","PeriodicalId":50172,"journal":{"name":"Journal of Internet Technology","volume":"22 1","pages":"465-472"},"PeriodicalIF":1.6,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48819622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}