
Latest publications in Computer Communications

A certificateless designated verifier sanitizable signature in e-health intelligent mobile communication system
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-11 DOI: 10.1016/j.comcom.2024.107935
Yonghua Zhan, Yang Yang, Bixia Yi, Renjie He, Rui Shi, Xianghan Zheng

With the widespread use of mobile communication and smart devices in the medical field, mobile healthcare has gained significant attention due to its ability to overcome geographical limitations and provide more efficient and high-quality medical services. In mobile healthcare, data from various instruments and wearable devices are collected, encrypted, and uploaded to the cloud, where they are accessible to medical professionals, researchers, and insurance companies, among others. However, ensuring the security and privacy of healthcare data in the context of mobile networks has been a highly challenging issue. Certificateless signature schemes allow patients to conceal their private information according to different sharing needs. Nevertheless, existing mobile healthcare data protection solutions suffer from costly certificate management and the inability to restrict signature verifiers. This paper proposes a certificateless designated verifier sanitizable signature for mobile healthcare scenarios, aiming to enhance the security and privacy of mobile healthcare data. The scheme enables the sanitization of sensitive data without the need for certificate management and allows the signature verifier to be specified. This ensures the confidentiality of medical data, protects patient privacy, and prevents unauthorized access to healthcare data. Security analysis and experimental comparisons demonstrate that the proposed scheme is efficient and effectively ensures data security and user privacy. It is therefore well suited to privacy protection for mobile healthcare data.
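To make the sanitization idea concrete, the following Python sketch models only the message-block view behind sanitizable signatures: the signer marks which blocks are admissible, and a sanitizer may later replace only those blocks. It is a minimal, non-cryptographic illustration; the paper's certificateless designated-verifier construction (key generation, signing, and verification algorithms) is not reproduced, and the class and field names are hypothetical.

```python
# Minimal, non-cryptographic sketch of the message-block model behind
# sanitizable signatures: the signer marks which blocks are admissible,
# and a sanitizer may later replace only those blocks (e.g., hiding a
# patient's name) while the rest of the record stays fixed.
# The paper's certificateless, designated-verifier construction is NOT modeled.
from dataclasses import dataclass, field


@dataclass
class SanitizableRecord:
    blocks: list[str]                                    # message split into blocks
    admissible: set[int] = field(default_factory=set)    # indices the sanitizer may change

    def sanitize(self, index: int, replacement: str) -> None:
        """Replace one block; only admissible blocks may be modified."""
        if index not in self.admissible:
            raise PermissionError(f"block {index} is not admissible for sanitization")
        self.blocks[index] = replacement


if __name__ == "__main__":
    record = SanitizableRecord(
        blocks=["patient: Alice", "diagnosis: flu", "device: wearable-ECG"],
        admissible={0},                                  # only the identity block may be hidden
    )
    record.sanitize(0, "patient: [REDACTED]")            # allowed
    try:
        record.sanitize(1, "diagnosis: none")             # blocked: not admissible
    except PermissionError as err:
        print("rejected:", err)
    print(record.blocks)
```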

Citations: 0
Transfer learning-accelerated network slice management for next generation services
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-07 DOI: 10.1016/j.comcom.2024.107937
Sam Aleyadeh, Ibrahim Tamim, Abdallah Shami

The current trend in user services places an ever-growing demand for higher data rates, near-real-time latencies, and near-perfect quality of service. To meet such demands, fundamental changes were made to the traditional radio access network (RAN), introducing Open RAN (O-RAN). This new paradigm is based on a virtualized and intelligent RAN architecture. However, with the increased complexity of 5G applications, traditional application-specific placement techniques have reached a bottleneck. Our paper presents a Transfer Learning (TL) augmented Reinforcement Learning (RL) based network slicing (NS) solution targeting more effective placement and reduced downtime for prolonged slice deployments. To achieve this, we propose an approach based on creating a robust and dynamic repository of specialized RL agents and network slices geared towards popular user service types such as eMBB, URLLC, and mMTC. The proposed solution consists of a heuristic-controlled two-module ML Engine and a repository. The objective function is formulated to minimize the downtime incurred by the VNFs hosted on commercial-off-the-shelf (COTS) servers. The performance of the proposed system is evaluated against traditional approaches using industry-standard 5G traffic datasets. The evaluation results show that the proposed solution consistently achieves lower downtime than the traditional algorithms.
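A minimal sketch of the repository idea described above, assuming a simple mapping from service type to stored policy weights that lets a new agent be warm-started from a previously trained one; the actual RL agents, the heuristic controller, and the paper's ML Engine are not reproduced, and all names are illustrative.

```python
# Illustrative sketch (not the paper's implementation) of the repository idea:
# keep pre-trained agent parameters per service type (eMBB, URLLC, mMTC) and
# warm-start a new placement agent from the stored entry when one exists,
# which is the essence of transfer-learning-accelerated slice management.
import numpy as np


class AgentRepository:
    def __init__(self):
        self._store = {}                                 # service type -> policy weights

    def save(self, service_type: str, weights: np.ndarray) -> None:
        self._store[service_type] = weights.copy()

    def warm_start(self, service_type: str, shape: tuple) -> np.ndarray:
        """Return stored weights if available, otherwise a random initialization."""
        if service_type in self._store:
            return self._store[service_type].copy()      # transfer-learning path
        return np.random.normal(scale=0.1, size=shape)   # cold start


if __name__ == "__main__":
    repo = AgentRepository()
    repo.save("eMBB", np.zeros((8, 4)))                  # previously trained eMBB agent
    warm = repo.warm_start("eMBB", (8, 4))               # reused; training resumes from here
    cold = repo.warm_start("URLLC", (8, 4))              # no entry yet -> fresh agent
    print(warm.shape, cold.shape)
```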

Citations: 0
Predicting and mitigating cyber threats through data mining and machine learning
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-06 DOI: 10.1016/j.comcom.2024.107949
Nusrat Samia, Sajal Saha, Anwar Haque

With cyber threats evolving alongside technological progress, strengthening network resilience to combat security vulnerabilities is crucial. This research extends cyber-crime analysis with an innovative approach, utilizing data mining and machine learning to not only predict cyber incidents but also reinforce network robustness. We introduce a real-time data collection framework to provide up-to-date cyberattack data, addressing current research limitations. By analyzing collected attack data, we identified temporal correlations in attack volumes across consecutive time frames. Our predictive model, developed using advanced machine learning and deep learning techniques, forecasts the frequency of cyber-attacks within specific time windows, demonstrating over a 15% improvement in accuracy compared to conventional baseline models. The methodologies employed include the use of Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) for capturing complex patterns in time series data, and the integration of a sliding window technique to transform raw data into a format suitable for supervised learning. Our experiments evaluated the performance of various models, including ARIMA, Random Forest, Support Vector Regression, and K-Nearest Neighbors Regression, across multiple scenarios. Furthermore, we developed a Power BI platform for visualizing global cyber-attack trends, providing valuable insights for enhancing cybersecurity defences. Our research demonstrates that cyber incidents are not entirely random, and advanced AI tools can significantly enhance cybersecurity defences by analyzing patterns and trends from previous instances. This comprehensive approach not only improves prediction accuracy but also offers a robust framework for reducing the risk and impact of future cyber-crimes through enhanced detection and prediction capabilities.
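As an illustration of the sliding-window transformation mentioned above, the following sketch turns a raw attack-count series into supervised (window, next-value) pairs; the window length, horizon, and example data are illustrative choices, not the paper's settings.

```python
# Sketch of the sliding-window transformation: a daily attack-count series is
# turned into (window, next-value) pairs so that any supervised model
# (RNN, random forest, SVR, KNN regression, ...) can be trained on it.
import numpy as np


def sliding_window(series: np.ndarray, window: int, horizon: int = 1):
    """Return X of shape (n, window) and y of shape (n,) for one-step forecasting."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window + horizon - 1])
    return np.array(X), np.array(y)


if __name__ == "__main__":
    daily_attacks = np.array([12, 15, 9, 22, 30, 28, 17, 25, 31, 40])
    X, y = sliding_window(daily_attacks, window=3)
    print(X.shape, y.shape)    # (7, 3) (7,)
    print(X[0], "->", y[0])    # [12 15  9] -> 22
```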

Citations: 0
A novel OTFS-based directional transmission scheme for airborne networks with ISAC technology
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-05 DOI: 10.1016/j.comcom.2024.107941
Xinyu Hong, Na Lv, Xiang Wang, Zhiyuan You

This paper investigates a directional transmission scheme for airborne networks (ANs) with orthogonal time frequency space (OTFS) modulation, to cope with the degradation of communication performance caused by the aircraft's uncertain, high-speed mobility. Besides showing better communication performance in high-mobility scenarios such as ANs, OTFS has also been verified as a basis for integrated sensing and communication (ISAC) systems. Accordingly, this paper proposes a sensing-assisted beam prediction method that exploits echoes to predict the next locations of moving aircraft, solving the beam rendezvous problem at the transmitter. In addition, for the data detection problem at the receiver, this paper proposes a novel pilot placement scheme relying on the predicted delays and Doppler shifts, realizing accurate channel estimation with lower overhead. Simulation results show that the proposed OTFS-based directional transmission scheme can achieve reliable communication performance with a low bit error rate.
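A toy sketch of what sensing-assisted beam prediction can look like, assuming a simple constant-velocity extrapolation from two echo-derived position estimates followed by beam steering toward the predicted point; the paper's OTFS/ISAC estimator and its delay-Doppler processing are not reproduced, and all numbers are placeholders.

```python
# Toy sketch of sensing-assisted beam prediction: two echo-derived position
# estimates give a velocity, constant-velocity extrapolation gives the next
# location, and the transmitter steers its beam toward that point.
import numpy as np


def predict_next_position(p_prev: np.ndarray, p_curr: np.ndarray, dt: float) -> np.ndarray:
    velocity = (p_curr - p_prev) / dt
    return p_curr + velocity * dt            # constant-velocity extrapolation


def steering_angles(tx: np.ndarray, target: np.ndarray) -> tuple:
    """Azimuth and elevation (radians) from the transmitter to the target."""
    d = target - tx
    azimuth = np.arctan2(d[1], d[0])
    elevation = np.arctan2(d[2], np.linalg.norm(d[:2]))
    return azimuth, elevation


if __name__ == "__main__":
    tx = np.array([0.0, 0.0, 0.0])
    p_prev = np.array([9_000.0, 1_000.0, 8_000.0])   # echo estimate at t - dt
    p_curr = np.array([9_200.0, 1_050.0, 8_000.0])   # echo estimate at t
    p_next = predict_next_position(p_prev, p_curr, dt=0.1)
    az, el = steering_angles(tx, p_next)
    print(p_next, np.degrees(az), np.degrees(el))
```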

Citations: 0
Towards a novel service broker policy for choosing the appropriate data center in cloud environments
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-05 DOI: 10.1016/j.comcom.2024.107939
Lin Shan, Li Sun, Amin Rezaeipanah

Cloud computing services give users quick access to dynamic and distributed resources. Increasing demand has created challenges, such as resource availability, privacy, and security, for providing efficient services in cloud computing. Cloud environments contain various computing resources, and allocating a suitable node to process a request can improve the quality of service on a large scale. Load balancing, which refers to distributing load among the different nodes of a distributed system, is one strategy for improving service quality and resource utilization. The cloud application service broker is responsible for load balancing by choosing the appropriate geo-distributed datacenter to process the requests of each end user. Parameters such as transmission delay, network delay, processing time, number of servers, workload, and service cost can be considered to select a suitable datacenter in close proximity. To reduce the adverse effects of a service broker's datacenter choice, this paper presents Rank-based Load Balancing in Geo-Distributed datacenters (RLBGD) as an effective service broker strategy in cloud environments. RLBGD uses a weighted combination of several criteria, such as processing time, number of servers, workload, processing speed, service cost, and response time, to dynamically rank datacenters and determine the appropriate one. The CloudAnalyst tool is used to simulate and analyze the performance of the proposed method. Experimental results show the effectiveness of RLBGD in terms of metrics such as service cost and processing time in different scenarios.
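The following sketch illustrates a generic weighted-score ranking in the spirit of RLBGD, assuming min-max normalization and hand-picked weights; the paper's exact scoring formula, criteria weights, and datacenter measurements are not reproduced, and all values are hypothetical.

```python
# Generic weighted-score ranking: each datacenter gets a score from a weighted
# combination of normalized criteria, and the broker picks the best-ranked one.
import numpy as np

criteria = ["processing_time", "servers", "workload", "cost", "response_time"]
# benefit-type criteria are better when larger, cost-type when smaller
benefit = {"processing_time": False, "servers": True, "workload": False,
           "cost": False, "response_time": False}
weights = {"processing_time": 0.25, "servers": 0.15, "workload": 0.20,
           "cost": 0.20, "response_time": 0.20}

datacenters = {   # hypothetical measurements per datacenter
    "DC-Asia":   {"processing_time": 12.0, "servers": 40, "workload": 0.70, "cost": 0.08, "response_time": 90.0},
    "DC-Europe": {"processing_time": 15.0, "servers": 64, "workload": 0.55, "cost": 0.10, "response_time": 120.0},
    "DC-NA":     {"processing_time": 10.0, "servers": 32, "workload": 0.80, "cost": 0.12, "response_time": 60.0},
}


def rank(dcs: dict) -> list:
    scores = {name: 0.0 for name in dcs}
    for c in criteria:
        values = np.array([dcs[name][c] for name in dcs], dtype=float)
        lo, hi = values.min(), values.max()
        norm = (values - lo) / (hi - lo) if hi > lo else np.ones_like(values)
        if not benefit[c]:
            norm = 1.0 - norm                            # smaller is better
        for name, v in zip(dcs, norm):
            scores[name] += weights[c] * v
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    for name, score in rank(datacenters):
        print(f"{name}: {score:.3f}")
```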

Citations: 0
Optimizing MRAI on large scale BGP networks: An emulation-based approach
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-04 DOI: 10.1016/j.comcom.2024.107940
Mattia Milani, Michele Segata, Luca Baldesi, Marco Nesler, Renato Lo Cigno, Leonardo Maccari
Modifying protocols that pertain to global Internet control is extremely challenging, because experimentation is almost impossible and both analytic and simulation models are not detailed and accurate enough to guarantee that changes will not negatively affect the Internet. Federated testbeds like the ones offered by the Fed4FIRE+ project offer a different solution: off-line Internet-scale experiments with thousands of Autonomous Systems (ASs). This work exploits Fed4FIRE+ for a large-scale experimental analysis of Border Gateway Protocol (BGP) convergence time under different hypotheses of Minimum Route Advertisement Interval (MRAI) setting, including an original proposal to improve MRAI management by dynamically setting it based on the topological position of the ASs in relation to the specific route being advertised with the UPDATE messages. MRAI is a timer that regulates the frequency of successive UPDATE messages sent by a BGP router to a specific peer for a given destination. Its large default value significantly slows down convergence after path changes, but its uncoordinated reduction can trigger storms of UPDATE messages and set off unstable behaviors known as route flapping. The work is based on standard-compliant modifications of the BIRD BGP daemon and shows the tradeoffs between convergence time and signaling overhead with different management techniques.
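As a hypothetical illustration of topology-dependent MRAI, the sketch below scales the timer with the AS-path distance from the route's origin; this is not the rule implemented in the paper's modified BIRD daemon, only an example of how such a dynamic setting could be expressed.

```python
# Illustrative-only rule for per-destination MRAI: scale the timer with the
# AS-path distance from the route's origin, so routers close to the change
# re-advertise quickly while distant routers wait longer and absorb churn.
# This is NOT the exact rule of the paper; it only shows the shape of a
# topology-dependent MRAI policy.

DEFAULT_MRAI = 30.0     # seconds, classic BGP default for eBGP sessions
MIN_MRAI = 1.0          # floor so updates are never sent back-to-back


def dynamic_mrai(as_path_len: int, max_depth: int = 10) -> float:
    """MRAI grows linearly with the distance (in AS hops) from the route origin."""
    depth = min(as_path_len, max_depth)
    return MIN_MRAI + (DEFAULT_MRAI - MIN_MRAI) * depth / max_depth


if __name__ == "__main__":
    for hops in (1, 3, 6, 10):
        print(f"AS-path length {hops:2d} -> MRAI {dynamic_mrai(hops):.1f} s")
```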
Citations: 0
Robust speech command recognition in challenging industrial environments
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-02 DOI: 10.1016/j.comcom.2024.107938
Stefano Bini, Vincenzo Carletti, Alessia Saggese, Mario Vento

Speech is among the main forms of communication between humans and robots in industrial settings, being the most natural way for a human worker to issue commands. However, the presence of pervasive and loud environmental noise poses significant challenges to the adoption of Speech-Command Recognition systems onboard manufacturing robots; indeed, they are expected to perform in real time on hardware with limited computational capabilities and to be robust and accurate in such complex environments. In this paper, we propose an innovative system based on an End-to-End architecture with a Conformer backbone. Our system is specifically designed to achieve high accuracy in noisy industrial environments and to guarantee a minimal computational burden, meeting stringent real-time requirements while running on computing devices embedded in robots. To increase the generalization capability of the system, the training procedure is driven by a Curriculum Learning strategy combined with dynamic data augmentation techniques that progressively increase the complexity of the input samples by raising the noise level during training. We have conducted extensive experimentation to assess the effectiveness of our system, using a dataset of more than 50,000 samples, of which about 2,000 were acquired during the daily operations of an Italian Stellantis factory. The results confirm that the proposed approach is suitable for adoption in a real industrial environment; indeed, it achieves an accuracy higher than 90% on both English and Italian commands, while maintaining a compact model size (the network is 1.81 MB) and running in real time on an industrial embedded device (41 ms on an NVIDIA Xavier NX).
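A minimal sketch of the curriculum-style augmentation described above, assuming a linear SNR schedule and additive noise mixing; the schedule shape, SNR range, and signals are illustrative and do not reflect the paper's training recipe.

```python
# Minimal sketch of the curriculum idea: noise is mixed into each training
# utterance at an SNR that decreases as training progresses, so early epochs
# see nearly clean speech and later epochs see heavily corrupted speech.
import numpy as np


def scheduled_snr_db(epoch: int, total_epochs: int, start_db: float = 30.0,
                     end_db: float = 0.0) -> float:
    frac = epoch / max(total_epochs - 1, 1)
    return start_db + (end_db - start_db) * frac         # linear curriculum


def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so that the mixture has the requested signal-to-noise ratio."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)   # 1 s dummy tone
    noise = rng.normal(size=16000)                                # dummy factory noise
    for epoch in (0, 5, 9):
        snr = scheduled_snr_db(epoch, total_epochs=10)
        batch = mix_at_snr(clean, noise, snr)
        print(f"epoch {epoch}: SNR {snr:5.1f} dB, mixture power {np.mean(batch**2):.3f}")
```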

Citations: 0
Locally verifiable approximate multi-member quantum threshold aggregation digital signature scheme
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-30 DOI: 10.1016/j.comcom.2024.107934
Zixuan Lu, Qingshui Xue, Tianhao Zhang, Jiewei Cai, Jing Han, Yixun He, Yinhang Li

Locally verifiable aggregate signature primitives can reduce the complexity of aggregate signature verification by running a local opening algorithm that generates auxiliary parameters. However, recent breakthroughs in quantum computing indicate that quantum computers may be able to break the security of traditional hardness-based aggregate signature schemes. To address these problems, this paper proposes, for the first time, a locally verifiable multi-member quantum threshold aggregate digital signature scheme based on the property that the verification of quantum coset states is a projection on the trans-subspace. Combined with the idea of auxiliary parameter generation from traditional locally verifiable aggregate signatures, it brings aggregation to current threshold quantum digital signatures and reduces the complexity of aggregate signature verification while achieving post-quantum security. In addition, verifying the signature key (a quantum state) of the signing members does not require measurement operations, and the generated signatures are classical, so all communication between the trusted third center (TC), the set of signing members, the classical digital signature verifier (CV), and the third-party trusted aggregation generator (TA) is classical, which simplifies the communication model. The performance analysis shows that this quantum aggregate signature scheme is more flexible and requires less quantum state preparation than comparable schemes.

Citations: 0
Generative adversarial imitation learning assisted virtual network embedding algorithm for space-air-ground integrated network
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-30 DOI: 10.1016/j.comcom.2024.107936
Peiying Zhang, Ziyu Xu, Neeraj Kumar, Jian Wang, Lizhuang Tan, Ahmad Almogren

The space-air-ground integrated network (SAGIN) comprises a multitude of interconnected and integrated heterogeneous networks. The network is large in scale, complex in structure, and highly dynamic. Virtual network embedding (VNE) is designed to efficiently allocate the resources of the physical network to diverse virtual network requests (VNRs) with different constraints while improving the acceptance ratio of VNRs. However, in a heterogeneous SAGIN environment, improving the utilization of network resources while ensuring the performance of the VNE algorithm is a very challenging topic. To address these issues, we first introduce a services diversion strategy (SDS) to select embedding nodes based on service type and network state, thereby alleviating the uneven use of resources across network domains. Subsequently, we propose a VNE algorithm (GAIL-VNE) based on generative adversarial imitation learning (GAIL). We construct a generator network based on the actor-critic architecture, which generates the probability of each physical node being selected for embedding from the observed network state. We then construct a discriminator network to distinguish generator samples from expert samples, which aids in updating the generator network. After offline training, the generator and discriminator reach a Nash equilibrium through their adversarial game. During the embedding of VNRs, the output of the generator provides an effective basis for generating VNE solutions. Finally, we verify the effectiveness of this method through experiments involving offline training and online embedding.
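For intuition, the following PyTorch sketch shows a GAIL-style discriminator that separates expert (state, action) pairs from pairs produced by the embedding policy and derives an imitation reward from its output; the VNE environment, the actor-critic generator, and the paper's feature encoding are not modeled, and the dimensions are placeholders.

```python
# Minimal GAIL-style discriminator sketch: it is trained to output 1 for expert
# (state, action) pairs and 0 for pairs produced by the embedding policy; a
# reward derived from its output can then drive the actor-critic generator.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 8          # illustrative feature sizes


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))   # logits


if __name__ == "__main__":
    disc = Discriminator()
    opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    expert_s, expert_a = torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM)
    policy_s, policy_a = torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM)

    logits = torch.cat([disc(expert_s, expert_a), disc(policy_s, policy_a)])
    labels = torch.cat([torch.ones(32, 1), torch.zeros(32, 1)])
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # imitation reward for the generator: higher when the discriminator is fooled
    with torch.no_grad():
        reward = -torch.log(1 - torch.sigmoid(disc(policy_s, policy_a)) + 1e-8)
    print(loss.item(), reward.mean().item())
```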

Citations: 0
Cooperative edge-caching based transmission with minimum effective delay in heterogeneous cellular networks
IF 4.5 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-08-28 DOI: 10.1016/j.comcom.2024.107928
Jiachao Yu, Chao Zhai, Hao Dai, Lina Zheng, Yujun Li

In heterogeneous cellular networks (HCNs), neighboring users often request similar contents asynchronously. Based on content popularity, base stations (BSs) can download and cache contents when the network is idle and transmit them locally when the network is busy, which can effectively reduce the backhaul burden and the transmission delay. We consider a two-tier HCN, where macro base stations (MBSs) and small base stations (SBSs) can cooperatively and probabilistically cache contents. Each user is associated with the BS offering the maximum average received signal power in either tier. With cooperative content transfer between the MBS tier and the SBS tier, users can adaptively obtain contents from BSs or remote content servers. We model both the wired and wireless delays incurred when a user requests an arbitrary content and propose the concept of effective delay. Content caching probabilities are optimized using the Marine Predators Algorithm by minimizing the average effective delay. Numerical results show that our proposed cooperative caching scheme achieves much shorter delays than the benchmark caching schemes.
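A back-of-the-envelope sketch of the effective-delay trade-off, assuming Zipf-distributed content popularity, independent per-tier caching probabilities, and fixed SBS, MBS, and backhaul delays; these modeling choices and numbers are illustrative and are not the paper's analytical model.

```python
# Back-of-the-envelope sketch: contents follow a Zipf popularity law, each tier
# caches a content independently with some probability, and a request is served
# with the local (SBS), macro (MBS), or backhaul delay depending on where the
# content is found. Optimizing the caching probabilities trades these off.
import numpy as np


def zipf_popularity(n_contents: int, alpha: float = 0.8) -> np.ndarray:
    ranks = np.arange(1, n_contents + 1)
    p = ranks ** (-alpha)
    return p / p.sum()


def average_effective_delay(pop, p_sbs, p_mbs, d_sbs=5.0, d_mbs=15.0, d_backhaul=80.0):
    """Expected delay (ms) when the SBS is checked first, then the MBS, then the remote server."""
    served_sbs = p_sbs
    served_mbs = (1 - p_sbs) * p_mbs
    served_remote = (1 - p_sbs) * (1 - p_mbs)
    per_content = served_sbs * d_sbs + served_mbs * d_mbs + served_remote * d_backhaul
    return float(np.sum(pop * per_content))


if __name__ == "__main__":
    pop = zipf_popularity(100)
    # cache the most popular contents with higher probability in both tiers
    p_sbs = np.clip(pop * 40, 0, 1)
    p_mbs = np.clip(pop * 80, 0, 1)
    print(f"avg effective delay: {average_effective_delay(pop, p_sbs, p_mbs):.2f} ms")
```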

Citations: 0