Hybrid FCSR Based Stream Cipher for Secure Communications in IoT
Shyi-Tsong Wu
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406010
Linear Feedback Shift Registers (LFSRs) are the basic hardware building blocks of stream ciphers, and Feedback with Carry Shift Registers (FCSRs) are their nonlinear analogues. An FCSR is a feedback architecture for generating long pseudorandom sequences. In this paper, we study the characteristics of FCSRs combined with nonlinear circuits such as Dawson's Summation Generator (DSG) and the lp-Geffe generator. We then propose a hybrid FCSR that applies the DSG and the lp-Geffe generator as nonlinear combining elements to increase the period and the linear complexity of the output sequence. In addition, we investigate the period, linear complexity, and randomness of the proposed keystream generator and use known attacks to verify its security strength. The pass rates of the proposed scheme are 100% for the FIPS PUB 140-1 random tests and at least 98% for the SP800-22 random tests.
{"title":"Hybrid FCSR Based Stream Cipher for Secure Communications in IoT","authors":"Shyi-Tsong Wu Shyi-Tsong Wu","doi":"10.53106/160792642023112406010","DOIUrl":"https://doi.org/10.53106/160792642023112406010","url":null,"abstract":"Linear Feedback Shift Register (LFSR) is the basic hardware of stream cipher, and Feedback with Carry Shift Register (FCSR) is the nonlinear analogues of LFSR. FCSR is a feedback architecture to generate long pseudorandom sequence. In this paper, we study the characteristics of FCSRs combined with nonlinear circuits such as Dawson’s Summation Generator (DSG), lp-Geffe generator and etc. Then we proposed a hybrid FCSR applying DSG and lp-Geffe generator as nonlinear combining elements to increase the period and the linear complexity of the output sequence. In addition, we further investigate the period, linear complexity, randomness, and use known attacks to verify the security strength of the proposed keystream generator. The pass rates of the proposed scheme are 100% for FIPS PUB 140-1 random tests, and at least 98% for SP800-22 random test, respectively.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139299580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy Protection Optimization for Federated Software Defect Prediction via Benchmark Analysis
Ying Liu, Yong Li, Ming Wen, Wenjing Zhang
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406001
Federated learning is a privacy-preserving machine learning technique that coordinates multi-participant co-modeling. It can alleviate the privacy issues of software defect prediction, an important technique for ensuring software quality. In this work, we implement Federated Software Defect Prediction (FedSDP) and optimize its privacy while preserving performance. We first construct a new benchmark to study the performance and privacy of federated software defect prediction. The benchmark consists of (1) 12 NASA software defect datasets, all real defect datasets from different projects in different domains, (2) horizontal federated learning scenarios, and (3) the Federated Software Defect Prediction algorithm (FedSDP). Benchmark analysis shows that, compared with local training, FedSDP provides additional privacy protection and security with guaranteed model performance. It also reveals that FedSDP introduces a large amount of model parameter computation and exchange during training, which brings threats to model users and attack challenges from unreliable participants. To provide more reliable privacy protection without losing prediction performance, we propose an optimization method that homomorphically encrypts model parameters to resist honest-but-curious participants. Experimental results show that our approach achieves more reliable privacy protection with excellent performance on all datasets.
{"title":"Privacy Protection Optimization for Federated Software Defect Prediction via Benchmark Analysis","authors":"Ying Liu Ying Liu, Yong Li Ying Liu, Ming Wen Yong Li, Wenjing Zhang Ming Wen","doi":"10.53106/160792642023112406001","DOIUrl":"https://doi.org/10.53106/160792642023112406001","url":null,"abstract":"Federated learning is a privacy-preserving machine learning technique that coordinates multi-participant co-modeling. It can alleviate the privacy issues of software defect prediction, which is an important technical way to ensure software quality. In this work, we implement Federated Software Defect Prediction (FedSDP) and optimize its privacy issues while guaranteeing performance. We first construct a new benchmark to study the performance and privacy of Federated Software defect prediction. The benchmark consists of (1) 12 NASA software defect datasets, which are all real software defect datasets from different projects in different domains, (2) Horizontal federated learning scenarios, and (3) the Federated Software Defect Prediction algorithm (FedSDP). Benchmark analysis shows that FedSDP provides additional privacy protection and security with guaranteed model performance compared to local training. It also reveals that FedSDP introduces a large amount of model parameter computation and exchange during the training process. There are model user threats and attack challenges from unreliable participants. To provide more reliable privacy protection without losing prediction performance we proposed optimization methods that use homomorphic encryption model parameters to resist honest but curious participants. Experimental results show that our approach achieves more reliable privacy protection with excellent performance on all datasets.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139301552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Integrated Semi-supervised Software Defect Prediction Model
Fanqi Meng, Wenying Cheng, Jingdong Wang
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406013
A novel semi-supervised software defect prediction model, FFeSSTri (Filtered Feature Selecting, Sample and Tri-training), is proposed to address the problem that class imbalance and too many irrelevant or redundant features in labelled samples lower the accuracy of semi-supervised software defect prediction. Its innovation lies in integrating an oversampling technique, a new feature selection method, and a Tri-training algorithm into the construction of FFeSSTri, which effectively improves accuracy. First, the oversampling technique expands the under-represented class, resolving the unbalanced distribution of the labelled samples. Second, a new filtered feature selection method based on relevance and redundancy is proposed, which excludes irrelevant or redundant features from the labelled samples. Finally, the Tri-training algorithm learns from the labelled training samples to build the defect prediction model FFeSSTri. Experiments conducted on the NASA software defect prediction dataset show that FFeSSTri outperforms four existing supervised learning methods and one semi-supervised learning method in terms of F-measure and AUC.
{"title":"An Integrated Semi-supervised Software Defect Prediction Model","authors":"Fanqi Meng Fanqi Meng, Wenying Cheng Fanqi Meng, Jingdong Wang Wenying Cheng","doi":"10.53106/160792642023112406013","DOIUrl":"https://doi.org/10.53106/160792642023112406013","url":null,"abstract":"A novel semi-supervised software defect prediction model FFeSSTri (Filtered Feature Selecting, Sample and Tri-training) is proposed to address the problem that class imbalance and too many irrelevant or redundant features in labelled samples lower the accuracy of semi-supervised software defect prediction. Its innovation lies in that the construction of FFeSSTri integrates an oversampling technique, a new feature selection method, and a Tri-training algorithm, thus it can effectively improve the accuracy. Firstly, the oversampling technique is applied to expand the class of inadequate samples, thus it solves the unbalanced classification of the labelled samples. Secondly, a new filtered feature selection method based on relevance and redundancy is proposed, which can exclude those irrelevant or redundant features from labelled samples. Finally, the Tri-training algorithm is used to learn the labelled training samples to build the defect prediction model FFeSSTri. The experiments conducted on the NASA software defect prediction dataset show that FFeSSTri outperforms the existing four supervised learning methods and one semi-supervised learning method in terms of F-Measure values and AUC values.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"23 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139306134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Knowledge Graph Construction Method for Software Project Based on CAJP
Yang Deng, Bangchao Wang, Zhongyuan Hua, Yong Xiao, Xingfu Li
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406006
In recent years, there has been increasing interest in using knowledge graphs (KGs) to help stakeholders organize and better understand the connections between various artifacts during software development. However, extracting entities and relationships automatically and accurately from open-source projects is still a challenge. Therefore, an efficient method called Concise Annotated JavaParser (CAJP) is proposed to support these extraction activities, which are vital for KG construction. Experimental results show that CAJP improves the accuracy and type coverage of entity extraction and ensures the accuracy of relationship extraction. Moreover, an intelligent question-and-answer (Q&A) system is designed to visualize and verify the quality of the KGs constructed from six open-source projects. Overall, the software-project-oriented KG provides developers with a valuable and intuitive way to access and understand project information.
{"title":"A Knowledge Graph Construction Method for Software Project Based on CAJP","authors":"Yang Deng Yang Deng, Bangchao Wang Yang Deng, Zhongyuan Hua Bangchao Wang, Yong Xiao Zhongyuan Hua, Xingfu Li Yong Xiao","doi":"10.53106/160792642023112406006","DOIUrl":"https://doi.org/10.53106/160792642023112406006","url":null,"abstract":"In recent years, there has been increasing interest in using knowledge graphs (KGs) to help stakeholders organize and better understand the connections between various artifacts during software development. However, extracting entities and relationships automatically and accurately in open-source projects is still a challenge. Therefore, an efficient method called Concise Annotated JavaParser (CAJP) has been proposed to support these extraction activities, which are vitally important for KG construction. The experimental result shows that CAJP improves the accuracy and type of entity extraction and ensures the accuracy of relationship exaction. Moreover, an intelligent question-and-answer (Q&A) system is designed to visualize and verify the quality of the KGs constructed from six open-source projects. Overall, the software project-oriented KG provides developers a valuable and intuitive way to access and understand project information.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139296933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S2F-YOLO: An Optimized Object Detection Technique for Improving Fish Classification
Feng Wang, Jing Zheng, Jiawei Zeng, Xincong Zhong, Zhao Li
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406004
The recent emergence of deep learning has enabled state-of-the-art approaches to achieve major breakthroughs in various fields such as object detection. However, popular object detection algorithms like YOLOv3, YOLOv4 and YOLOv5 are computationally expensive and consume considerable computing resources. Experimental results on our fish datasets show that YOLOv5x performs best in accuracy, with a best mean average precision (mAP) of 90.07%, while YOLOv5s stands out in recognition speed compared with the other models. In this paper, a lighter object detection model based on YOLOv5 (referred to as S2F-YOLO) is proposed to overcome these deficiencies. Under the premise of only a small loss of accuracy, object recognition is greatly accelerated. Applying S2F-YOLO to commercial fish species detection and comparing it with the other popular algorithms, its mAP is 2.24% lower than that of YOLOv5x while it reaches 216 FPS, nearly 50% faster than YOLOv5s. Compared with other detectors, our algorithm also shows better overall performance, making it more suitable for practical applications.
{"title":"S2F-YOLO: An Optimized Object Detection Technique for Improving Fish Classification","authors":"Feng Wang Feng Wang, Jing Zheng Feng Wang, Jiawei Zeng Jing Zheng, Xincong Zhong Jiawei Zeng, Zhao Li Xincong Zhong","doi":"10.53106/160792642023112406004","DOIUrl":"https://doi.org/10.53106/160792642023112406004","url":null,"abstract":"The current emergence of deep learning has enabled state-of-the-art approaches to achieve a major breakthrough in various fields such as object detection. However, the popular object detection algorithms like YOLOv3, YOLOv4 and YOLOv5 are computationally inefficient and need to consume a lot of computing resources. The experimental results on our fish datasets show that YOLOv5x has a great performance at accuracy which the best mean average precision (mAP) can reach 90.07% and YOLOv5s is conspicuous in recognition speed compared to other models. In this paper, a lighter object detection model based on YOLOv5(Referred to as S2F-YOLO) is proposed to overcome these deficiencies. Under the premise of ensuring a small loss of accuracy, the object recognition speed is greatly accelerated. The S2F-YOLO is applied to commercial fish species detection and the other popular algorithms comparison, we obtained incredible results when the mAP is 2.24% lower than that of YOLOv5x, the FPS reaches 216M, which is nearly half faster than YOLOv5s. When compared with other detectors, our algorithm also shows better overall performance, which is more suitable for actual applications.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"17 1-4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139297019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face Image Recognition Algorithm Based on Label Complementation
Jiakang Tang, Lin Cui, Zhiwei Zhang
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406007
In face image recognition, labels play an important role in recognition and classification, and rich, accurate labels can greatly improve the accuracy rate. However, the labels attached to an image to be recognized almost never describe it completely and accurately. At the same time, feature extraction inevitably brings along a large amount of redundant and useless information, which harms the generalization performance of the model. Accordingly, we propose a face image recognition algorithm based on label completion in multi-label learning. First, the SVD algorithm removes redundant and useless information from the original features through dimensionality reduction to obtain simplified sample attributes, and a label completion algorithm supplements the image labels using the extracted feature information. Finally, the label data, made as complete as possible, is fed into an extreme learning machine to construct the face recognition model and produce predictions for the images. Experiments on the ORL dataset demonstrate that the algorithm achieves good recognition results.
{"title":"Face Image Recognition Algorithm Based on Label Complementation","authors":"Jiakang Tang Jiakang Tang, Lin Cui Jiakang Tang, Zhiwei Zhang Lin Cui","doi":"10.53106/160792642023112406007","DOIUrl":"https://doi.org/10.53106/160792642023112406007","url":null,"abstract":"In face image recognition, labels play a fairly important role in recognition and classification, and rich and perfect labels can greatly improve the accuracy rate. However, it is almost impossible for the labels in the image to be recognized to describe the image completely and accurately. At the same time, the data obtained when feature extraction is performed on an image inevitably extracts a large amount of redundant and useless information at the same time, which affects the generalization performance of the model. Accordingly, we propose a face image recognition algorithm based on label completion in multi label learning. First, the SVD algorithm is used to remove redundant and useless information from the features of the original data by dimensionality reduction operation to obtain simplified sample attribute information, and the label completion algorithm is used to supplement the labels of the images using the extracted feature information. Finally the obtained label data as complete as possible is put into the extreme learning machine to construct the face recognition model and give the prediction results of the images. Experiments on the ORL dataset demonstrate that the algorithm can achieve good recognition results.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"85 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139298228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Public Integrity Verification for Cloud Storage with Efficient Key-update
Hao Yan, Yanan Liu, Dandan Huang, Shuo Qiu, Zheng Zhang
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406009
To improve the security of data in cloud storage, a number of data integrity auditing schemes have been proposed in the past several years. However, only a few schemes consider the security challenge that a user's key may be exposed unknowingly, which is very likely to happen in real life. To cope with this problem, we propose a public data integrity auditing scheme for cloud storage with efficient key updating. In our scheme, the user's key is updated periodically to resist the risk of key exposure. Meanwhile, the authentication tags of the blocks are updated together with the key so that data integrity can still be verified normally. The key-updating algorithm in our scheme is very efficient, requiring only one hash operation, whereas previous schemes need two or three exponentiations. Moreover, the workload of tag updating is undertaken by the cloud server using a re-tag-key, which reduces the burden on users and improves the efficiency of the scheme. The communication cost is also reduced greatly; for instance, the information transferred in the 're-key' step is decreased from two group elements to one. Furthermore, we give a formal security model for our scheme and prove its security under the CDH assumption. Experimental results show that our proposal is efficient and feasible.
{"title":"Public Integrity Verification for Cloud Storage with Efficient Key-update","authors":"Hao Yan Hao Yan, Yanan Liu Hao Yan, Dandan Huang Yanan Liu, Shuo Qiu Dandan Huang, Zheng Zhang Shuo Qiu","doi":"10.53106/160792642023112406009","DOIUrl":"https://doi.org/10.53106/160792642023112406009","url":null,"abstract":"To improve the security of the data on cloud storage, numbers of data integrity auditing schemes have been proposed in the past several years. However, there only a few schemes considered the security challenge that the user’s key is exposed unknowingly which is very likely to happen in real-life. To cope with the problem, we propose a public data integrity auditing scheme for cloud storage with efficient key updating. In our scheme, the user’s key is updated periodically to resist the risk of key exposure. Meanwhile, the authentication tags of blocks are updated simultaneously with the key updating so as to guarantee the data integrity can be verified normally. The algorithm of key updating in our scheme is very efficient which only needs a hash operation while previous schemes need two or three exponentiation operations. Moreover, the workload of tag updating is undertaken by cloud servers with a re-tag-key which reduces the burden of users and improves the efficiency of the scheme. The communication cost of the scheme is also reduced greatly, for instance, the information size in ‘re-key’ step is decreased from two group members to one. Furthermore, we give the formal security model of our scheme and prove the security under the CDH assumption. The experimental results show that our proposal is efficient and feasible.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139292420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Interference and Multi-Model Dynamic Scheduling of the Small Satellite Based on Dual Population Genetic Algorithm
Hailong Yang, Tian Xia, Zeyu Xia, Dayong Zhai
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406003
Small satellites offer the outstanding advantages of flexible reconfiguration and strong system robustness through large-scale networked operation, which has attracted attention at home and abroad in recent years. However, solving the scheduling problem in large-scale satellite constellation/cluster production remains the key to increasing satellite production volume. In this paper, the existing production line framework and the critical technologies of intelligent manufacturing are analyzed, and an intelligent production line flow is proposed. Based on a job shop scheduling (JSP) model, the interference events in multi-model scheduling are classified, and by improving the dynamic scheduling strategy of the dual population genetic algorithm, we solve the multi-model scheduling problem. Simulation results show that the scheduling scheme minimizes the influence of interference events on the schedule, which demonstrates the superiority and effectiveness of the scheduling strategy.
{"title":"Multi-Interference and Multi-Model Dynamic Scheduling of the Small Satellite Based on Dual Population Genetic Algorithm","authors":"Hailong Yang Hailong Yang, Tian Xia Hailong Yang, Zeyu Xia Tian Xia, Dayong Zhai Zeyu Xia","doi":"10.53106/160792642023112406003","DOIUrl":"https://doi.org/10.53106/160792642023112406003","url":null,"abstract":"Small satellites have the outstanding advantages of flexible reconfiguration and strong system robustness through large-scale network operation, which has attracted attention at domestic and overseas in recent years. However, how to solve the scheduling problem in large-scale satellite constellation/cluster production is always the key to increasing the volume production of satellites. In this paper, the existing production line framework and the critical technologies of intelligent manufacturing are analyzed, and the intelligent production line flow is proposed. Based on the establishment of the job shop scheduling (JSP) model, the Interference of multi-model scheduling is classified, and by improving the dynamic scheduling strategy of the dual population genetic algorithm, we solve the multi-model scheduling problem. The simulation results show that the scheduling scheme can minimize the influence of interference events on the schedule, which proves the superiority and effectiveness of the scheduling strategy.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139303361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Bat Algorithm Based on Fast Diving Strategy
Yanxiang Geng, Liyi Zhang, Yong Zhang, Zhixing Li, Jiahui Li
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406008
The bat algorithm has good global search ability, but it suffers from slow convergence in the local search stage, low convergence accuracy, and a tendency to fall into local optima from which it cannot escape. To address these defects, and inspired by the Harris hawk's strategy of catching rabbits, this paper introduces a prey-surrounding mechanism that quickly approaches the food and judges its quality, thereby achieving fast convergence and improving convergence accuracy. In experiments, the improved algorithm with the fast diving strategy is evaluated on benchmark test functions and compared with the basic bat algorithm, the backtracking bat algorithm, and HABC. The improved bat algorithm with the fast diving strategy achieves better optimization accuracy, faster convergence, a simpler structure, and a higher success rate.
{"title":"Improved Bat Algorithm Based on Fast Diving Strategy","authors":"Yanxiang Geng Yanxiang Geng, Liyi Zhang Yanxiang Geng, Yong Zhang Liyi Zhang, Zhixing Li Yong Zhang, Jiahui Li Zhixing Li","doi":"10.53106/160792642023112406008","DOIUrl":"https://doi.org/10.53106/160792642023112406008","url":null,"abstract":"Bat algorithm has good global search ability, but it has some problems, such as slow convergence speed in local search stage, low convergence accuracy, easy to fall into local optimization and can not escape. In view of the above defects, inspired by Harris Hawks’s strategy of catching rabbits, this paper introduces the surrounding mechanism of prey, which can quickly approach the food and judge its quality, so as to achieve the purpose of rapid convergence and improve the convergence accuracy. The experiment shows that the improved algorithm of the fast diving strategy is tested by using the test function, and compared with the basic bat algorithm, backtracking bat algorithm and HABC. The improved bat algorithm of the fast diving strategy has better optimization accuracy, faster convergence speed, simple algorithm and higher success rate.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139291326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sound Ray Correction Method Based on Historical Data of Marine Acoustic Environment
Jian Li, Zhen Zhang, Yue Pan, Ming-Yu Gu, Guang-Jie Han
Pub Date: 2023-11-01 | DOI: 10.53106/160792642023112406014
This paper proposes a new method for sound ray correction based on historical data such as temperature, salinity, and depth of the sea area. The proposed method utilizes the Douglas-Peucker (D-P) algorithm to mine and extract features from sound velocity data processed with empirical orthogonal functions (EOF), completing the inversion of sound speed profiles (SSPs). Compared with traditional EOF methods, an increase in computational speed is achieved. The method then quickly and linearly layers the processed sound speed profile and uses the equivalent sound velocity method (ESVM) for sound ray equivalence to complete underwater target localization. Compared with the constant velocity method and the constant gradient method based on adaptive layering, the proposed method has higher accuracy and greater robustness to complex underwater environments. The effectiveness of the method is verified by applying it to an ultra-short baseline (USBL) positioning system.
{"title":"A Sound Ray Correction Method Based on Historical Data of Marine Acoustic Environment","authors":"Jian Li Jian Li, Zhen Zhang Jian Li, Yue Pan Zhen Zhang, Ming-Yu Gu Yue Pan, Guang-Jie Han Ming-Yu Gu","doi":"10.53106/160792642023112406014","DOIUrl":"https://doi.org/10.53106/160792642023112406014","url":null,"abstract":"This paper proposes a new method for sound ray correction based on historical data, such as temperature, salinity, and depth of the sea area. The proposed method utilizes the Douglas-Peucker (D-P) algorithm to mine and extract features from sound velocity data processed using empirical orthogonal functions (EOF), completing the inversion of sound speed profiles (SSP). Compared to traditional EOF methods, an increase in the computational speed is achieved. Afterwards, this method quickly and linearly layers the processed sound speed profile, and uses the equivalent sound velocity method (ESVM) for sound ray equivalence to complete underwater target localization. Compared to the constant velocity method and the constant gradient method based on adaptive layering, the proposed method has higher accuracy and higher robustness to complex underwater environments. The effectiveness of the method is verified by applying it to the ultra-short baseline (USBL) positioning system.","PeriodicalId":442331,"journal":{"name":"網際網路技術學刊","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139291726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}