
Computer Communications: Latest Publications

aBBR: An augmented BBR for collaborative intelligent transmission over heterogeneous networks in IIoT
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-28 DOI: 10.1016/j.comcom.2024.107932
Shujie Yang, Kefei Song, Zhenhui Yuan, Lujie Zhong, Mu Wang, Xiang Ji, Changqiao Xu

In the era of Industry 5.0, with the deep convergence of Industrial Internet of Things (IIoT) and 5G technology, stable transmission of massive data in heterogeneous networks becomes crucial. This is not only the key to improving the efficiency of human–machine collaboration, but also the basis for ensuring system continuity and reliability. The arrival of 5G has brought new challenges to the communication of IIoT in heterogeneous environments. Due to the inherent characteristics of wireless networks, such as random packet loss and network jitter, traditional transmission control schemes often fail to achieve optimal performance. In this paper, we propose a novel transmission control algorithm, aBBR, an augmented algorithm based on BBRv3. aBBR dynamically adjusts the sending window size through real-time analysis to enhance the transmission performance in heterogeneous networks. Simulation results show that, compared to traditional algorithms, aBBR demonstrates the best overall performance in terms of throughput, latency, and retransmission. When random packet loss exists in the link, aBBR improves the throughput by an average of 29.3% and decreases the retransmission rate by 18.5% while keeping the transmission delay at the same level as BBRv3.
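The abstract does not spell out aBBR's window-update rule. As a loose illustration of the general idea it describes (scaling a BBR-style bandwidth-delay-product window from real-time RTT and loss samples), here is a minimal Python sketch; the class name, thresholds, and gain heuristic are all assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: a BBR-style sender that scales its congestion
# window from real-time RTT/loss samples. Thresholds and gains are assumed,
# not taken from the aBBR paper.
from collections import deque

class WindowEstimator:
    def __init__(self, mss_bytes=1500):
        self.mss = mss_bytes
        self.rtt_samples = deque(maxlen=16)   # recent RTTs (seconds)
        self.min_rtt = float("inf")
        self.max_bw = 0.0                     # bytes per second
        self.loss_window = deque(maxlen=64)   # 1 = lost, 0 = delivered

    def on_ack(self, rtt_s, delivered_bytes, interval_s, lost):
        self.rtt_samples.append(rtt_s)
        self.min_rtt = min(self.min_rtt, rtt_s)
        self.max_bw = max(self.max_bw, delivered_bytes / max(interval_s, 1e-6))
        self.loss_window.append(1 if lost else 0)

    def cwnd_bytes(self):
        # Base window = estimated bandwidth-delay product (classic BBR idea).
        bdp = self.max_bw * self.min_rtt
        loss_rate = sum(self.loss_window) / max(len(self.loss_window), 1)
        jitter = (max(self.rtt_samples) - min(self.rtt_samples)) if self.rtt_samples else 0.0
        gain = 2.0                              # probing headroom
        if loss_rate > 0.05 or jitter > 0.5 * self.min_rtt:
            gain = 1.25                         # back off under random loss / jitter
        return max(4 * self.mss, int(gain * bdp))

est = WindowEstimator()
est.on_ack(rtt_s=0.03, delivered_bytes=45_000, interval_s=0.03, lost=False)
print(est.cwnd_bytes())
```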

Citations: 0
Research on maximizing real demand response based on link addition in social networks
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-27 DOI: 10.1016/j.comcom.2024.107933
Yuxin Gao, Jianming Zhu, Peikun Ni

The impact of social networks on real-life scenarios is intensifying with the diversification of the information they disseminate. Consequently, the interconnection between social networks and tangible networks is strengthening. Notably, we have observed that messages disseminated on social networks, particularly those soliciting aid, exert a significant influence on the underlying network structure. This study aims to investigate the role and importance of social networks in the information dissemination process, as well as to construct a linear threshold model tailored for the dissemination of emergency information across both real and social networks, leveraging conventional models of information spread. We have developed a model that increases the number of connection edges in social networks in order to enhance their value. Additionally, we show that the objective function is submodular and that the resulting problem is NP-hard. As a result, we can use algorithms with an approximation guarantee of 1 − e^(−1) − θ′ to solve our problem while ensuring the accuracy of the solution. We also analyze the complexity of the algorithm for solving this problem. Finally, we validated our conclusions on three publicly available datasets and one real dataset and analyzed the results of the solution.
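Because the objective is monotone submodular, the usual way to obtain a (1 − 1/e)-style guarantee is greedy selection of edges by marginal gain. The sketch below illustrates that generic pattern on a toy reachability objective; the graph, spread function, and budget are placeholders rather than the paper's demand-response model.

```python
# Illustrative greedy selection for a monotone submodular objective:
# repeatedly add the candidate edge with the largest marginal gain.
# The toy objective (number of nodes reachable from a seed set) stands in
# for the paper's demand-response spread function.
import itertools

def reachable(nodes, edges, seeds):
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def greedy_add_edges(nodes, base_edges, seeds, budget):
    chosen = []
    candidates = {(u, v) for u, v in itertools.permutations(nodes, 2)} - set(base_edges)
    for _ in range(budget):
        current = reachable(nodes, base_edges + chosen, seeds)
        gain, best = 0, None
        for e in candidates:
            g = reachable(nodes, base_edges + chosen + [e], seeds) - current
            if g > gain:
                gain, best = g, e
        if best is None:
            break
        chosen.append(best)
        candidates.discard(best)
    return chosen

nodes = [0, 1, 2, 3, 4]
base = [(0, 1), (2, 3)]
print(greedy_add_edges(nodes, base, seeds=[0], budget=2))
```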

Citations: 0
5G configured grant scheduling for seamless integration with TSN industrial networks
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-26 DOI: 10.1016/j.comcom.2024.107930
Ana Larrañaga-Zumeta, M. Carmen Lucas-Estañ, Javier Gozálvez, Aitor Arriola

The integration of 5G (5th Generation) and TSN (Time Sensitive Networking) networks is key for the support of emerging Industry 4.0 applications, where the flexibility and adaptability of 5G will be combined with the deterministic communication features provided by TSN. For an effective and efficient 5G-TSN integration, both networks need to be coordinated. However, 5G has not been designed to provide deterministic communications. In this context, this paper proposes a 5G configured grant scheduling scheme that coordinates its decisions with the TSN scheduling to satisfy the deterministic and end-to-end latency requirements of industrial applications. The proposed scheme avoids the scheduling conflicts that can happen when packets of different TSN flows are generated with different periodicities. It efficiently coordinates the access of different TSN flows to the radio resources and complies with the 3GPP (Third Generation Partnership Project) standard requirements.
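As a rough illustration of the scheduling-conflict problem described above (flows with different periodicities competing for the same configured-grant slots), the following sketch assigns slot offsets so that no two periodic flows collide within the hyperperiod. Slot granularity and periods are toy values, not the paper's 5G/TSN parameters.

```python
# Illustrative sketch: assign configured-grant (CG) slot offsets to periodic
# TSN flows so that no two flows collide in any slot of the hyperperiod.
# Flow periods and slot granularity are toy values, not 3GPP parameters.
from math import lcm

def assign_offsets(periods_slots):
    hyper = lcm(*periods_slots)
    occupied = set()                      # slots already granted in the hyperperiod
    offsets = []
    for period in periods_slots:
        placed = None
        for offset in range(period):
            slots = {offset + k * period for k in range(hyper // period)}
            if not (slots & occupied):
                occupied |= slots
                placed = offset
                break
        if placed is None:
            raise ValueError(f"no conflict-free offset for period {period}")
        offsets.append(placed)
    return offsets

# Three flows with periods of 2, 4 and 8 slots share the same cell.
print(assign_offsets([2, 4, 8]))   # [0, 1, 3]
```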

Citations: 0
DTL-5G: Deep transfer learning-based DDoS attack detection in 5G and beyond networks
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-24 DOI: 10.1016/j.comcom.2024.107927
Behnam Farzaneh, Nashid Shahriar, Abu Hena Al Muktadir, Md. Shamim Towhid, Mohammad Sadegh Khosravani

Network slicing is considered a key enabler for 5G and beyond mobile networks for supporting a variety of new services, including enhanced mobile broadband, ultra-reliable and low-latency communication, and massive connectivity, on the same physical infrastructure. However, this technology increases the susceptibility of networks to cyber threats, particularly Distributed Denial-of-Service (DDoS) attacks. These attacks have the potential to cause service quality degradation by overloading network functions that are central to the seamless operation of network slices. This calls for an Intrusion Detection System (IDS) as a shield against a wide array of DDoS attacks. In this regard, one promising solution would be the use of Deep Learning (DL) models for detecting possible DDoS attacks, an approach that has already made its way into the field given its manifest effectiveness. However, one particular challenge with DL models is that they require large volumes of labeled data for efficient training, which are not readily available in operational networks. A possible workaround is to resort to Transfer Learning (TL) approaches that can transfer the knowledge learned from prior training to a target domain with limited labeled data. This paper investigates how Deep Transfer Learning (DTL) based approaches can improve the detection of DDoS attacks in 5G networks by leveraging DL models, such as Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Residual Network (ResNet), and Inception, as base models. A comprehensive dataset generated in our 5G network slicing testbed serves as the source dataset for DTL, which includes both benign and different types of DDoS attack traffic. After learning features, patterns, and representations from the source dataset using initial training, we fine-tune base models using a variety of TL processes on a target DDoS attack dataset. The 5G-NIDD dataset, which has a sparse amount of annotated traffic pertaining to several DDoS attacks generated in a real 5G network, is chosen as the target dataset. The results show that the proposed DTL models improve the detection of different types of DDoS attacks in the 5G-NIDD dataset compared to the case when no TL is applied. According to the results, the BiLSTM and Inception models are the top-performing models. BiLSTM shows an improvement of 13.90%, 21.48%, and 12.22% in terms of accuracy, recall, and F1-score, respectively, whereas Inception demonstrates an enhancement of 10.09% in terms of precision, compared to the models that do not adopt TL.
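The fine-tuning step described here follows the standard transfer-learning pattern: pre-train a base model on the source dataset, then freeze the feature extractor and retrain only the head on the target data. The PyTorch sketch below shows that pattern for a BiLSTM classifier; the layer sizes, input shapes, and hyper-parameters are assumptions, not the paper's configuration.

```python
# Illustrative transfer-learning pattern (not the paper's exact architecture):
# pre-train a BiLSTM traffic classifier on a source dataset, then freeze the
# recurrent feature extractor and fine-tune only the head on the target data.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # classify from the last time step

model = BiLSTMClassifier()
# ... assume source-domain training happened here ...

# Transfer step: freeze the BiLSTM, re-initialise and fine-tune the head.
for p in model.lstm.parameters():
    p.requires_grad = False
model.head = nn.Linear(model.head.in_features, 2)
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy target-domain batch standing in for 5G-NIDD flows.
x = torch.randn(8, 20, 40)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```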

Citations: 0
Noncoherent multiuser massive SIMO with mixed differential and index modulation
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-24 DOI: 10.1016/j.comcom.2024.107931
Xiangchuan Gao, Yancong Li, Zheng Dong, Xingwang Li

This paper proposes a new mixed differential and index modulation framework for noncoherent multiuser massive single-input multiple-output (SIMO) systems. While differential modulation and detection is a popular noncoherent scheme, its constellation collisions limit the resulting error performance. To address this issue, we introduce a user with binary index modulation (IM) among the differential users, which substantially reduces collisions. We then analyze a three-user SIMO system with binary modulations, attaining a closed-form bit error rate (BER) expression together with a fast noncoherent maximum-likelihood (ML) detection algorithm for each user. Furthermore, a closed-form optimal power loading vector is derived by minimizing the worst-case BER under individual power constraints. Finally, an efficient one-dimensional bisection search algorithm is employed to optimize constellations for arbitrary numbers of differential users and constellation sizes by minimizing the system BER. Simulation results validate the theoretical analysis and demonstrate the superiority of the proposed scheme compared to existing differential schemes.
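The one-dimensional bisection search mentioned above can be illustrated on any unimodal objective. The sketch below bisects on the sign of a finite-difference slope; the quadratic surrogate stands in for the system BER as a function of a constellation parameter and is not the paper's expression.

```python
# Illustrative one-dimensional bisection for minimising a unimodal objective,
# the same search pattern the paper applies to the system BER as a function of
# a constellation parameter. The quadratic surrogate below is a placeholder.
def bisect_minimise(f, lo, hi, tol=1e-6, eps=1e-7):
    """Bisection on the sign of a finite-difference derivative of f."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        slope = f(mid + eps) - f(mid - eps)
        if slope > 0:          # objective increasing: minimiser lies to the left
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Stand-in "BER vs. constellation scaling" curve with a minimum at 0.7.
surrogate = lambda a: (a - 0.7) ** 2 + 0.01
print(round(bisect_minimise(surrogate, 0.0, 2.0), 4))   # ~0.7
```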

Citations: 0
Secure workflow scheduling algorithm utilizing hybrid optimization in mobile edge computing environments
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-23 DOI: 10.1016/j.comcom.2024.107929
Dileep Kumar Sajnani, Xiaoping Li, Abdul Rasheed Mahesar

The rapid advancement of mobile communication technology and devices has greatly improved our way of life. It also presents a new possibility: data sources can be used to accomplish computing tasks at nearby locations. Mobile Edge Computing (MEC) is a computing model that provides computer resources specifically designed to handle mobile tasks. Nevertheless, certain obstacles must be carefully tackled, specifically regarding the security and quality of service of workflow scheduling over MEC. This research proposes a new workflow scheduling method based on Feedback Artificial Remora Optimization (FARO) to address the issue of scheduling processes with improved security in MEC. In this context, the fitness function takes multiple objectives into account, such as CPU utilization, memory utilization, encryption cost, and execution time. These objectives are used to enhance the scheduling of workflow tasks based on security considerations. The FARO algorithm is a combination of the Feedback Artificial Tree (FAT) and the Remora Optimization Algorithm (ROA). The experimental findings demonstrate that the developed approach surpasses current methods by a large margin in terms of CPU use, memory consumption, encryption cost, and execution time, with values of 0.012, 0.010, 0.017, and 0.036, respectively.
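The abstract lists four scheduling objectives. A metaheuristic such as FARO would score each candidate schedule with a fitness function over those objectives; the weighted-sum sketch below is one plausible form, with weights and metric scales chosen arbitrarily for illustration rather than taken from the paper.

```python
# Illustrative multi-objective fitness used to score candidate workflow
# schedules (lower is better). Weights and metric scales are assumed; a search
# algorithm like FARO would evaluate such a function inside its optimisation loop.
def schedule_fitness(cpu_util, mem_util, encryption_cost, exec_time,
                     weights=(0.25, 0.25, 0.25, 0.25)):
    metrics = (cpu_util, mem_util, encryption_cost, exec_time)
    return sum(w * m for w, m in zip(weights, metrics))

# Two hypothetical candidate schedules with normalised metrics.
candidates = {
    "schedule_a": (0.60, 0.55, 0.10, 0.30),
    "schedule_b": (0.45, 0.70, 0.05, 0.25),
}
best = min(candidates, key=lambda k: schedule_fitness(*candidates[k]))
print(best, schedule_fitness(*candidates[best]))
```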

Citations: 0
RIS-NOMA communications over Nakagami-m fading with imperfect successive interference cancellation
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-22 DOI: 10.1016/j.comcom.2024.107926
Jinyuan Gu, Mingxing Wang, Wei Duan, Lei Zhang, Huaiping Zhang

Considering imperfect successive interference cancellation (SIC) for non-orthogonal multiple access (NOMA) communications, this work studies the cooperative reconfigurable intelligent surface (RIS)- and relay-assisted system under Nakagami-m fading. We focus on the performance comparison of these cooperative schemes under different channel conditions and system parameters. In addition, we analyze the minimum number of RIS elements required for the RIS-assisted scheme to achieve the same performance as the relay-assisted scheme at a given signal-to-noise ratio (SNR). The cases of optimal continuous and discrete phase shift designs for the RIS are also discussed, and we compare them to study the impact of residual phase errors on performance. In particular, we compare an active RIS and a passive RIS under the same system power constraint. Simulation results demonstrate the reliability of the analysis and validate that the relay-assisted scheme is superior to the RIS-assisted one when the number of RIS elements is small and the transmit power is low. The results also confirm that the deployment of RIS should consider the actual conditions of the application scenario.
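For the minimum-RIS-element analysis mentioned above, a common textbook idealization is that a passive RIS with N elements scales the receive SNR roughly as N²; under that assumption (which may differ from the paper's closed-form result), the smallest N matching a relay's target SNR can be estimated as follows. All numbers are invented for illustration.

```python
# Back-of-the-envelope sketch (not the paper's derivation): under the common
# idealisation that a passive RIS with N elements boosts receive SNR roughly
# as N**2 * snr_per_element, find the smallest N whose SNR matches a
# relay-assisted target SNR. All figures below are made up for illustration.
import math

def min_ris_elements(target_snr_linear, snr_per_element_linear):
    return math.ceil(math.sqrt(target_snr_linear / snr_per_element_linear))

target_db, per_element_db = 20.0, -15.0          # relay target vs. per-element SNR
target = 10 ** (target_db / 10)
per_element = 10 ** (per_element_db / 10)
print(min_ris_elements(target, per_element))     # minimum number of RIS elements
```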

Citations: 0
Deep reinforcement learning-based resource scheduling for energy optimization and load balancing in SDN-driven edge computing
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-19 DOI: 10.1016/j.comcom.2024.107925
Xu Zhou, Jing Yang, Yijun Li, Shaobo Li, Zhidong Su

Traditional techniques for edge computing resource scheduling may result in large amounts of wasted server resources and energy consumption; thus, exploring new approaches to achieve higher resource and energy efficiency is a new challenge. Deep reinforcement learning (DRL) offers a promising solution by balancing resource utilization, latency, and energy optimization. However, current methods often focus solely on energy optimization for offloading and computing tasks, neglecting the impact of server numbers and resource operation status on energy efficiency and load balancing. On the other hand, prioritizing latency optimization may result in resource imbalance and increased energy waste. To address these challenges, we propose a novel energy optimization method coupled with a load balancing strategy. Our approach aims to minimize overall energy consumption and achieve server load balancing under latency constraints. This is achieved by controlling the number of active servers and individual server load states through a two-stage DRL-based energy and resource optimization algorithm. Experimental results demonstrate that our scheme saves an average of 19.84% energy compared to mainstream reinforcement learning methods, and 49.60% and 45.33% compared to Round Robin (RR) and random scheduling, respectively. Additionally, our method is optimized for reward value, load balancing, runtime, and anti-interference capability.
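A scheduler of this kind needs a reward that trades off energy, load balance, and the latency constraint. The sketch below shows one plausible reward shaping; the coefficients, power figures, and latency budget are assumptions rather than the paper's design.

```python
# Illustrative reward shaping for an energy/load-balancing scheduler of this
# kind: penalise total energy and load imbalance, and add a hard penalty when
# the latency constraint is violated. All coefficients are assumptions.
import statistics

def step_reward(server_loads, active_power_w, idle_power_w, latency_ms,
                latency_budget_ms=50.0, alpha=1.0, beta=2.0, gamma=10.0):
    # Active servers draw full power; idle (zero-load) servers draw idle power.
    energy = sum(active_power_w for load in server_loads if load > 0) \
           + sum(idle_power_w for load in server_loads if load == 0)
    imbalance = statistics.pstdev(server_loads) if len(server_loads) > 1 else 0.0
    violation = max(0.0, latency_ms - latency_budget_ms)
    return -(alpha * energy / 1000.0 + beta * imbalance + gamma * violation)

print(step_reward(server_loads=[0.8, 0.6, 0.0], active_power_w=200.0,
                  idle_power_w=60.0, latency_ms=42.0))
```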

Citations: 0
FQ-SAT: A fuzzy Q-learning-based MPQUIC scheduler for data transmission optimization
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-12 DOI: 10.1016/j.comcom.2024.107924
Thanh Trung Nguyen, Minh Hai Vu, Thi Ha Ly Dinh, Thanh Hung Nguyen, Phi Le Nguyen, Kien Nguyen

In the 5G and beyond era, multipath transport protocols, including MPQUIC, are necessary in various use cases. In MPQUIC, one of the most critical issues is efficiently scheduling upcoming transmission packets on several paths while accounting for path dynamicity. To this end, this paper introduces FQ-SAT, a novel Fuzzy Q-learning-based MPQUIC scheduler for data transmission optimization, including download time, in heterogeneous wireless networks. Different from previous works, FQ-SAT combines Q-learning and Fuzzy logic in an MPQUIC scheduler to determine optimal transmission on heterogeneous paths. FQ-SAT leverages the self-learning ability of reinforcement learning (i.e., a Q-learning model) to deal with heterogeneity. Moreover, FQ-SAT uses Fuzzy logic to dynamically adjust the Q-learning model's hyper-parameters as network conditions change rapidly. We evaluate FQ-SAT extensively in various scenarios in both simulated and actual networks. The results show that, compared to state-of-the-art MPQUIC schedulers, FQ-SAT reduces the single-file download time by 3.2%–13.5% in simulation and by 4.1%–13.8% in an actual network, reduces the download time of all resources by up to 20.4% in a web browsing evaluation, and achieves up to 97.5% on-time segments in video streaming.
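To make the combination concrete, the sketch below pairs a simple Q-learning path chooser with a crude fuzzy-style rule that raises the learning rate and exploration when recent RTT jitter is high. The state design, membership function, and reward are placeholder choices, not FQ-SAT's actual model.

```python
# Minimal sketch of the combination described above: a Q-learning path chooser
# whose learning rate and exploration are tuned by a crude fuzzy-style rule on
# recent RTT variability. Memberships, rewards, and thresholds are assumptions.
import random
import statistics

class FuzzyQScheduler:
    def __init__(self, paths):
        self.q = {p: 0.0 for p in paths}
        self.rtt_hist = {p: [] for p in paths}
        self.alpha, self.epsilon = 0.3, 0.2

    def _fuzzy_tune(self):
        # "High jitter" membership pushes the agent to learn and explore more.
        jitters = [statistics.pstdev(h[-10:]) for h in self.rtt_hist.values() if len(h) >= 2]
        jitter = max(jitters) if jitters else 0.0
        high = min(1.0, jitter / 20.0)          # membership in [0, 1], 20 ms ~ "high"
        self.alpha = 0.1 + 0.4 * high
        self.epsilon = 0.05 + 0.25 * high

    def pick_path(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, path, rtt_ms, delivered_kb):
        self.rtt_hist[path].append(rtt_ms)
        reward = delivered_kb / max(rtt_ms, 1.0)     # throughput-per-delay proxy
        self.q[path] += self.alpha * (reward - self.q[path])
        self._fuzzy_tune()

sched = FuzzyQScheduler(["wifi", "lte"])
for _ in range(20):
    p = sched.pick_path()
    rtt = random.uniform(15, 60) if p == "wifi" else random.uniform(40, 80)
    sched.update(p, rtt, delivered_kb=100)
print(sched.q)
```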

Citations: 0
Improved dynamic Byzantine Fault Tolerant consensus mechanism
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-08 DOI: 10.1016/j.comcom.2024.08.004
Fei Tang, Jinlan Peng, Ping Wang, Huihui Zhu, Tingxian Xu

Byzantine Fault Tolerance (BFT) consensus protocols are widely used in consortium blockchains to ensure data consistency. However, BFT protocols are generally static, which means that the dynamic joining and exiting of nodes leads to reconfiguration of the consortium blockchain system. Moreover, most BFT protocols cannot support clearing slow, crashed, or faulty nodes, which limits the application of consortium blockchains. To solve these problems, this paper proposes a new Dynamic Scalable BFT (D-SBFT) protocol. D-SBFT optimizes SBFT by using Distributed Key Generation (DKG) technology and the BLS aggregate signature scheme. On the basis of SBFT, we add Join, Exit, and Clear algorithms. Among them, the Join and Exit algorithms enable nodes to actively join and leave the consortium blockchain more flexibly, while Clear can remove slow, crashed, or faulty nodes from the consortium blockchain. Experimental results show that our D-SBFT protocol can efficiently handle dynamic node changes while exhibiting good performance in the consensus process.
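The dynamic-membership part of such a protocol can be illustrated without the cryptography: the sketch below tracks joins, exits, and clears and recomputes the tolerated faults f and the commit quorum from the live node set (n ≥ 3f + 1). The DKG and BLS aggregate-signature steps are omitted, and the class is a toy illustration, not the D-SBFT protocol itself.

```python
# Toy sketch of the membership bookkeeping such a protocol needs (DKG and
# BLS aggregate-signature steps omitted): nodes join, exit, or get cleared,
# and the fault tolerance f and quorum size are recomputed from the live set.
class MembershipTable:
    def __init__(self, nodes):
        self.nodes = set(nodes)

    def join(self, node):
        self.nodes.add(node)

    def exit(self, node):
        self.nodes.discard(node)

    def clear(self, slow_or_faulty):
        # Remove slow/crashed/faulty replicas reported by the consensus layer.
        self.nodes -= set(slow_or_faulty)

    def fault_tolerance(self):
        return (len(self.nodes) - 1) // 3      # largest f with n >= 3f + 1

    def quorum(self):
        return 2 * self.fault_tolerance() + 1  # votes needed to commit

m = MembershipTable(["n1", "n2", "n3", "n4"])
m.join("n5")
m.clear(["n2"])                                # n2 reported as crashed
print(len(m.nodes), m.fault_tolerance(), m.quorum())   # 4 nodes, f=1, quorum=3
```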

Citations: 0