Songhai Lin, Hong Xiao, Wenjun Jiang, Dafeng Li, Jiaben Liang, Zelin Li
The automation of data analysis in the form of scientific workflows has become a widely adopted practice in many fields of research. Data-intensive experiments built on workflows gain automation and provenance support, both of which help alleviate the reproducibility crisis. This paper surveys existing provenance models as well as scientific workflow applications. We not only summarize the models at different levels but also compare the applications, with particular attention to blockchain as applied to provenance in scientific workflows. A new design for a secure provenance system is then proposed, and provenance capabilities that emerging technologies may enable are discussed at the end.
"A survey of provenance in scientific workflow," Songhai Lin, Hong Xiao, Wenjun Jiang, Dafeng Li, Jiaben Liang, Zelin Li. Journal of High Speed Networks, pp. 129-145, published 2023-02-15. DOI: 10.3233/jhs-222017.
Traditional crafts such as tategu (door and window fittings), kimono (clothes), and shikki (lacquerware) are regarded as important items for daily life in Japan; however, in recent years the industry that manufactures and sells these products has experienced various problems and has continued to decline. Conversely, overseas demand for traditional crafts has been gradually increasing along with the rise in foreign visitors to Japan. For these reasons, the traditional crafts industry needs to provide information about and promote traditional crafts to overseas customers. Accordingly, in this study we implement a high-presence traditional crafts experience system using virtual reality technology. The proposed system gives users a highly realistic virtual space experience through a head-mounted display and a data glove, and multiple users can share the space over the network. Further, we consider promoting traditional crafts to overseas customers by combining multicultural architectural styles with Japanese culture.
"A high presence traditional crafts experience system that combines multicultural architectural styles with Japanese culture," Tomoyuki Ishida, Yangzhicheng Lu. Journal of High Speed Networks, pp. 183-196, published 2023-01-18. DOI: 10.3233/jhs-222074.
Semantic matching is one of the critical technologies for intelligent customer service. Since Bidirectional Encoder Representations from Transformers (BERT) was proposed, fine-tuning a large-scale pre-trained language model has become the standard way to implement text semantic matching. In practical applications, however, the accuracy of the BERT model is limited by the size of the pre-training corpus and by proper nouns in the target domain. To solve this problem, a knowledge enhancement method that masks the input based on a domain dictionary is proposed. Firstly, for the model input, we use keyword matching to recognize and mask in-domain words. Secondly, we use self-supervised learning to inject knowledge of the target domain into the BERT model. Thirdly, we fine-tune the BERT model on the public datasets LCQMC and BQboost. Finally, we test the model's performance on a financial company's user data. The experimental results show that with our method and BQboost, accuracy increases by 12.12% on average in practical applications.
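The dictionary-based masking step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the function name `domain_mask` and the toy finance dictionary are assumptions.

```python
def domain_mask(tokens, domain_dict, mask_token="[MASK]"):
    """Replace in-domain words with the mask token, so that
    self-supervised training forces the model to predict them."""
    return [mask_token if tok in domain_dict else tok for tok in tokens]

# toy example with a hypothetical financial-domain dictionary
tokens = ["open", "a", "margin", "account", "today"]
finance_dict = {"margin", "account"}
masked = domain_mask(tokens, finance_dict)
# masked == ["open", "a", "[MASK]", "[MASK]", "today"]
```

The masked sequence is then fed to the masked-language-model objective, injecting domain vocabulary before the usual fine-tuning stage.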
"Knowledge enhancement BERT based on domain dictionary mask," Xianglin Cao, Hong Xiao, Wenjun Jiang. Journal of High Speed Networks, pp. 121-128, published 2023-01-16. DOI: 10.3233/jhs-222013.
Guangsi Xiong, Ping Li, Hanlin Zeng, Hong Xiao, Wenjun Jiang
Fault diagnosis is an important step in the intelligent development of industrial robots. To address the weak diagnosis performance caused by insufficient training samples, a fault diagnosis model based on a triplet network is proposed. Firstly, we combine a multiscale convolutional neural network (MSCNN) with a channel attention network (squeeze-and-excitation network, SENet) and use it to construct a triplet sub-network structure, MS-SECNN, which can adaptively extract features from the raw fault signal. Then, feature similarity is measured with a triplet loss in the low-dimensional space to perform the fault classification task. The experiments are based on a real industrial robot operation dataset. We use a few-shot learning strategy to test diagnostic performance with small samples and compare the model against the WDCNN, FDCNN and MSCNN models. Experimental results show that the proposed model classifies faults more effectively under small samples. In addition, when the training sample size is 1400, the average accuracy of MS-SECNN reaches 99.21%.
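The triplet loss at the heart of such a model is standard and can be shown concretely. This is a generic sketch of the loss, not the paper's MS-SECNN implementation; the margin value and the toy 2-D embeddings are assumptions.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull embeddings of the same fault class
    together and push different classes at least `margin` apart."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# anchor and positive share a fault class; negative is a different class
loss = triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])
# d(a,p)=0.1, d(a,n)=2.0, so the margin is satisfied and the loss is 0.0
```

During training the three embeddings come from the three weight-sharing sub-networks, and classification at test time reduces to nearest-neighbour comparison in the embedding space.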
"Fault diagnosis model of multi-axis industrial robot based on triplet network," Guangsi Xiong, Ping Li, Hanlin Zeng, Hong Xiao, Wenjun Jiang. Journal of High Speed Networks, pp. 75-83, published 2022-12-22. DOI: 10.3233/jhs-222014.
In modern society, multi-agent consensus is applied in many settings such as distributed machine learning and wireless sensor networks. However, some agents may behave abnormally due to external attack or internal faults, so the fault-tolerant consensus problem has been studied recently; among the proposed solutions, Q-consensus is one of the state-of-the-art methods for identifying all faulty agents and achieving consensus among normal agents in general networks. To attack the Q-consensus algorithm, this paper proposes a novel strategy, called the split attack, which is simple yet capable of breaking consensus convergence. By aggregating the states of neighboring nodes with an extra perturbation, the normal nodes are split into sub-groups that converge to two separate values, so consensus is broken. Two scenarios are considered: introducing additional faulty nodes and compromising original nodes. In the former case, two additional faulty nodes are deployed, each responsible for misleading part of the normal nodes; in the latter, two originally normal nodes are compromised to mislead the whole system. Moreover, selecting which nodes to compromise is fundamentally a classification problem and is therefore optimized with a CNN. Finally, numerical simulations verify the proposed schemes and indicate that the proposed method outperforms other attack methods.
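The splitting effect can be illustrated with a minimal averaging-consensus simulation. This is not the paper's Q-consensus setting: the line topology, the plain local-averaging rule, and the two stubborn endpoint nodes are all assumptions chosen only to show how fixed adversarial states prevent the normal nodes from agreeing.

```python
def consensus_step(states, neighbors, faulty):
    """One local-averaging round; faulty nodes ignore the protocol and
    hold fixed target values, pulling their side of the network apart."""
    new = {}
    for i, s in states.items():
        if i in faulty:
            new[i] = s  # adversary keeps broadcasting its target value
        else:
            vals = [states[j] for j in neighbors[i]] + [s]
            new[i] = sum(vals) / len(vals)
    return new

# a line of four normal nodes; two faulty endpoints anchor 0 and 10
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
states = {0: 0.0, 1: 5.0, 2: 5.0, 3: 5.0, 4: 5.0, 5: 10.0}
faulty = {0, 5}
for _ in range(200):
    states = consensus_step(states, neighbors, faulty)
# the normal nodes settle on a gradient (about 2, 4, 6, 8): consensus is broken
```

Even this toy model shows the attack's essence: a pair of non-conforming states is enough to split an averaging network into groups that never reconcile.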
"An anti-consensus strategy based on continuous perturbation updates in opposite directions," Yujie Xie, Xintong Liang, Yifan Huang, Jian Hou, Yubo Jia. Journal of High Speed Networks, pp. 15-25, published 2022-12-06. DOI: 10.3233/jhs-220001.
The Internet of Vehicles (IoV) represents a new generation of vehicular communications, with limited computation-offloading, energy and memory resources, built on 5G/6G technologies that have grown enormously and are used in a wide variety of Intelligent Transportation Systems (ITS). Due to the limited battery power of smart vehicles, energy consumption is one of the main and critical challenges in IoV environments, and optimizing resource management strategies with AI-based methods is an important way to improve it. Various machine learning algorithms exist for selecting optimal solutions for energy-efficient resource management. This paper surveys existing energy-aware resource management strategies for IoV case studies and performs a comparative analysis of their AI-based methods and machine learning algorithms. The analysis offers a deeper technical understanding of these algorithms, which should help in designing new hybrid AI approaches that optimize resource management strategies while reducing energy consumption.
"Energy-aware resource management in Internet of vehicles using machine learning algorithms," Sichao Chen, Yuanchao Hu, Liejiang Huang, Dilong Shen, Yuanjun Pan, Ligang Pan. Journal of High Speed Networks, pp. 27-39, published 2022-11-15. DOI: 10.3233/jhs-222004.
Yunxuan Su, Xu An Wang, Weidong Du, Yunlong Ge, Kaiyang Zhao, M. Lv
With the development of big data technology, medical data has become increasingly important. It not only contains personal privacy information but also involves medical security issues. This paper proposes a secure data fitting scheme for the medical IoT based on the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption algorithm. The scheme encrypts the KAGGLE-HDP (Heart Disease Prediction) dataset with CKKS homomorphic encryption and computes the model's weight and bias by gradient descent. The experimental results show that on the KAGGLE-HDP dataset, with a threshold of 0.7, the parameter setting (Poly_modulus_degree, Coeff_mod_bit_sizes, Scale) = (16384; 43, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 43; 23) and 3 iterations, the scheme achieves a recognition accuracy of 96.7%. The scheme thus offers high recognition accuracy and better privacy protection than other data fitting schemes.
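The arithmetic the scheme evaluates under encryption is ordinary logistic-regression gradient descent. The sketch below shows only that plaintext computation: the encryption layer is omitted, the toy one-feature dataset is an assumption, and under CKKS the sigmoid would be replaced by a low-degree polynomial approximation since the ciphertext domain supports only additions and multiplications.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_step(w, b, X, y, lr=0.1):
    """One batch gradient-descent step for logistic regression; under
    CKKS this same arithmetic would run on encrypted vectors."""
    n = len(X)
    preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in X]
    dw = [sum((p - yi) * x[j] for p, yi, x in zip(preds, y, X)) / n
          for j in range(len(w))]
    db = sum(p - yi for p, yi in zip(preds, y)) / n
    return [wi - lr * di for wi, di in zip(w, dw)], b - lr * db

# toy separable data; labels follow the sign of the single feature
X = [[1.0], [2.0], [-1.0], [-2.0]]
y = [1, 1, 0, 0]
w, b = [0.0], 0.0
for _ in range(500):
    w, b = gradient_step(w, b, X, y)
# w[0] grows positive, so sigmoid(w·x + b) > 0.5 exactly for positive x
```

The paper's contribution lies in choosing CKKS parameters (polynomial modulus degree, coefficient modulus chain, scale) so that this iteration stays within the available multiplicative depth for the reported 3 iterations.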
"A secure data fitting scheme based on CKKS homomorphic encryption for medical IoT," Yunxuan Su, Xu An Wang, Weidong Du, Yunlong Ge, Kaiyang Zhao, M. Lv. Journal of High Speed Networks, pp. 41-56, published 2022-11-08. DOI: 10.3233/jhs-222016.
With the rapid development of social networks, studying and analyzing their structure and behavior has become one of the most important requirements of businesses. Social network analysis serves many purposes, such as product advertising, market orientation detection, influential member detection, user behavior prediction, and recommender system improvement. One of the newest research topics in social network analysis is enhancing information propagation performance in different application-dependent respects. In this paper, a new method is proposed to improve metrics such as distribution time and precision on social networks. The method uses the local attributes of nodes along with the structural information of the network to forward data across the network and reduce propagation time. First, centrality and assortativity are calculated for every node to select two sets of nodes with the highest values for each criterion. Then, the initial active nodes of the network are selected as the intersection of the two sets. Next, the distribution paths are determined from the initial active nodes to compute the propagation time. The performance analysis shows that the proposed method outperforms other state-of-the-art methods in terms of distribution time, precision, recall, and AUPR.
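The seed-selection step (top-k by two criteria, then intersect) can be sketched as follows. This is illustrative only: the paper's VoteRank and assortativity computations are replaced by simpler stand-ins (degree, and average neighbour degree as a crude local assortativity-style score), and the toy edge list is an assumption.

```python
from collections import Counter, defaultdict

def degree_centrality(edges):
    """Degree of each node (stand-in for the paper's centrality score)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def avg_neighbor_degree(edges, deg):
    """Average degree of a node's neighbours, used here as a crude
    local assortativity-style score for illustration."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    return {n: sum(deg[m] for m in ns) / len(ns) for n, ns in nbrs.items()}

def top_k(scores, k):
    return {n for n, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]}

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
deg = degree_centrality(edges)
annd = avg_neighbor_degree(edges, deg)
# initial active (seed) nodes: intersection of the two top-k sets
seeds = top_k(deg, 3) & top_k(annd, 3)
```

Propagation then starts from `seeds`, and distribution paths and times are measured from those nodes outward.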
"Data forwarding: A new VoteRank and Assortativity based approach to improve propagation time in social networks," Kasra Majbouri Yazdi, Jingyu Hou, Saeid Khodayi, Adel Majbouri Yazdi, Saeed Saedi, Wanlei Zhou. Journal of High Speed Networks, pp. 275-285, published 2022-11-08. DOI: 10.3233/jhs-220695.
Jagdeep Singh, S. K. Dhurandher, I. Woungang, L. Barolli
Opportunistic Delay Tolerant Networks, also referred to as Opportunistic Networks (OppNets), are a subset of wireless networks whose mobile nodes have discontinuous, opportunistic connections. Developing a performant routing protocol in such an environment therefore remains a challenge. Most research in the literature has shown that reinforcement learning-based routing algorithms can achieve good routing performance, but these algorithms suffer from under-estimation and/or over-estimation. To address these shortcomings, this paper proposes a Double Q-learning based routing protocol for Opportunistic Networks, named Off-Policy Reinforcement-based Adaptive Learning (ORAL), which uses a weighted double Q-estimator to select the most suitable next-hop node for a message without bias. The next-hop selection involves a probability-based reward mechanism that considers a node's delivery probability and the frequency of encounters among nodes to boost the protocol's efficiency. Simulation results show that the proposed ORAL protocol improves the message delivery ratio by maintaining a trade-off between underestimation and overestimation. Simulations on the HAGGLE INFOCOM 2006 real mobility data trace and a synthetic model show that, as time-to-live varies, (1) the proposed ORAL scheme outperforms DQLR by 14.05%, 9.4% and 5.81% in terms of delivery probability, overhead ratio and average delay, respectively; and (2) it outperforms RLPRoPHET by 16.17%, 9.2% and 6.85% in the same metrics.
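A weighted double Q-estimator update can be written generically as below. This sketch is not ORAL's exact update rule: the blend weight `beta`, the tabular representation, and the toy state/action names are assumptions; it only shows the mechanism of decoupling action selection from action evaluation and blending the two tables' estimates.

```python
import random
from collections import defaultdict

def weighted_double_q_update(QA, QB, s, a, r, s2, actions,
                             alpha=0.1, gamma=0.9, beta=0.5):
    """One weighted double Q-learning update: one table picks the greedy
    next action, and the target blends both tables' values for that
    action with weight beta, tempering the over-estimation of plain
    Q-learning and the under-estimation of plain double Q-learning."""
    main, other = (QA, QB) if random.random() < 0.5 else (QB, QA)
    a_star = max(actions, key=lambda x: main[(s2, x)])
    blended = beta * main[(s2, a_star)] + (1 - beta) * other[(s2, a_star)]
    main[(s, a)] += alpha * (r + gamma * blended - main[(s, a)])

QA, QB = defaultdict(float), defaultdict(float)
# hypothetical routing step: forwarding during a contact earned reward 1
weighted_double_q_update(QA, QB, "s0", "fwd", 1.0, "s1", ["fwd", "hold"])
# exactly one table moved toward the reward by alpha * r = 0.1
```

In a routing context the state would encode the current custodian and the actions the candidate next-hop nodes, with rewards derived from delivery probability and encounter frequency as the abstract describes.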
"Double Q-learning based routing protocol for opportunistic networks," Jagdeep Singh, S. K. Dhurandher, I. Woungang, L. Barolli. Journal of High Speed Networks, pp. 1-14, published 2022-11-03. DOI: 10.3233/jhs-222018.
In next-generation wireless networks, the number of terminals connected to the network, the communication protocols, and the available channels will all increase, so network slicing will become more important. Vehicles, buses, trains and motorcycles are also considered communication terminals, and they require independent network management that accounts for their movement, such as joining and leaving networks. Delay-Disruption-Disconnection Tolerant Networking (DTN) has therefore attracted attention for its potential to support inter-vehicle communication. This paper presents Contact-Time (CT) based and Adaptive-Timer (AT) based Message Suppression (MS) methods for Vehicular DTN. The CT-based MS method is evaluated with three DTN protocols for Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication; the AT-based MS method is evaluated with conventional Epidemic and two proposed Epidemic-based methods for V2V communication. We compare the MS method, the Message Suppression Controller (MSC), and MSC with Adaptive Threshold (MSC-ATh). The simulation results show that MSC-ATh performs better than the other approaches: storage consumption improves as the number of vehicles increases, and there is no reduction in PDR even with message suppression enabled. For Epidemic, with 16 Road-Side Units (RSUs), the PDR results are the best among the DTN protocols compared. The MSC-ATh method consumes about 22% less storage than Epidemic, and its delay performance improves as the Suppression Coefficients (SCs) and number of vehicles increase.
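The core idea of message suppression in epidemic-style forwarding can be sketched with a small counter-based controller. This is a generic illustration, not the paper's MSC or MSC-ATh logic: the class name, the fixed overhear threshold, and the message IDs are assumptions (the adaptive-timer variant would instead grow a per-message back-off interval on each overhear).

```python
class MessageSuppressor:
    """Toy message-suppression controller for epidemic forwarding:
    a node stops re-forwarding a message once it has overheard it
    `threshold` times, saving storage and channel capacity."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.seen = {}  # message id -> number of times overheard

    def overhear(self, msg_id):
        self.seen[msg_id] = self.seen.get(msg_id, 0) + 1

    def should_forward(self, msg_id):
        return self.seen.get(msg_id, 0) < self.threshold

node = MessageSuppressor(threshold=2)
node.overhear("m1")
first = node.should_forward("m1")    # True: overheard only once so far
node.overhear("m1")
second = node.should_forward("m1")   # False: suppressed after two overhears
```

Tuning the threshold (or, in the adaptive case, the timer growth) trades storage and overhead against the delivery ratio, which is the trade-off the paper's simulations quantify.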
"Performance evaluation of contact-time based and adaptive-timer based message suppression methods for inter-vehicle communication in vehicular DTN," Makoto Ikeda. Journal of High Speed Networks, pp. 57-73, published 2022-11-03. DOI: 10.3233/jhs-222071.