
Latest publications from Future Generation Computer Systems-The International Journal of Escience

QM-ARC: QoS-aware Multi-tier Adaptive Cache Replacement Strategy
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-10-03. DOI: 10.1016/j.future.2024.107548. Volume 163, Article 107548.
Lydia Ait-Oucheggou , Stéphane Rubini , Abdella Battou , Jalil Boukhobza
Distributed data-centric systems, such as Named Data Networking, utilize in-network caching to reduce application latency by buffering relevant data in high-speed memory. However, the significant increase in data traffic makes expanding memory capacity prohibitively expensive. To address this challenge, integrating technologies like non-volatile memory and high-speed solid-state drives with dynamic random-access memory can form a cost-effective multi-tier cache system. Additionally, most existing caching policies focus on categorizing data based on recency and frequency, overlooking the varying Quality-of-Service (QoS) requirements of applications and customers, a concept supported by Service Level Agreements in various service delivery models, particularly in Cloud computing. One of the most prominent algorithms in the caching policy literature is the Adaptive Replacement Cache (ARC), which uses recency and frequency lists but does not account for QoS. In this paper, we propose a QoS-aware Multi-tier Adaptive Replacement Cache (QM-ARC) policy. QM-ARC extends ARC by incorporating QoS-based priorities between data applications and customers using a penalty concept borrowed from service-level management practices. QM-ARC is generic, applicable to any number of cache tiers, and can accommodate various penalty functions. Furthermore, we introduce a complementary feature for QM-ARC that employs Q-learning to dynamically adjust the sizes of the two ARC lists. Our solution, evaluated using both synthetic and real-world traces, demonstrates significant improvements in QoS compared to state-of-the-art methods by better accounting for priority levels. Results show that QM-ARC reduces penalties by up to 45% and increases the hit rate for high-priority data by up to 84%, without negatively impacting the overall hit rate, which also increases by up to 61%.
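The abstract does not detail QM-ARC's internals, but the core idea of biasing replacement with a QoS penalty can be sketched in a few lines. The toy `PriorityLRUCache` below is a hypothetical name and is not the authors' algorithm (it uses a single LRU list rather than ARC's two lists); it only illustrates evicting the candidate with the lowest recency-rank-times-penalty score, so low-priority data leaves the cache first:

```python
from collections import OrderedDict

class PriorityLRUCache:
    """Toy cache illustrating QoS-penalty-biased eviction.

    Not the QM-ARC algorithm: it uses one LRU list instead of ARC's
    recency/frequency lists, and only shows the penalty idea.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # key -> (value, penalty); oldest first

    def put(self, key, value, penalty=1.0):
        if key in self.items:
            self.items.move_to_end(key)
            self.items[key] = (value, penalty)
            return
        if len(self.items) >= self.capacity:
            # Score = recency rank (1 = least recent) x QoS penalty, so
            # low-penalty (low-priority) items are evicted before
            # high-penalty ones of similar recency.
            _, victim = min(
                (rank * pen, k)
                for rank, (k, (_, pen)) in enumerate(self.items.items(), 1)
            )
            del self.items[victim]
        self.items[key] = (value, penalty)

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)
        return self.items[key][0]
```

With a real ARC base, the same penalty weighting would instead be applied when choosing victims from ARC's recency (T1) and frequency (T2) lists.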
Citations: 0
An intelligent native network slicing security architecture empowered by federated learning
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-10-02. DOI: 10.1016/j.future.2024.107537. Volume 163, Article 107537.
Rodrigo Moreira , Rodolfo S. Villaça , Moisés R.N. Ribeiro , Joberto S.B. Martins , João Henrique Corrêa , Tereza C. Carvalho , Flávio de Oliveira Silva
Network Slicing (NS) has transformed the landscape of resource sharing in networks, offering flexibility to support services and applications with highly variable requirements in areas such as next-generation 5G/6G mobile networks (NGMN), vehicular networks, the industrial Internet of Things (IoT), and verticals. Although significant research and experimentation have driven the development of network slicing, existing architectures often lack intrinsic, intelligent security capabilities at the architectural level. This paper proposes an intelligent, architecture-level security mechanism to improve NS solutions. We designed a security-native architecture that deploys intelligent microservices as federated agents based on machine learning, providing intra-slice and architectural operation security for the Slicing Future Internet Infrastructures (SFI2) reference architecture. Notably, federated-learning approaches match modern, highly distributed microservice-based architectures, thus providing a unifying and scalable design choice for NS platforms that addresses both service and security. Using ML-Agents and Security Agents, our approach identified Distributed Denial-of-Service (DDoS) and intrusion attacks within the slice using generic and non-intrusive telemetry records, achieving an average accuracy of approximately 95.60% in the network slicing architecture and 99.99% for the deployed slice (intra-slice). This result demonstrates the potential of leveraging architectural operational security and introduces a promising new research direction for network slicing architectures.
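The abstract does not specify the aggregation rule used by the federated security agents; the standard choice, FedAvg (size-weighted averaging of locally trained model parameters), is sketched below as a generic illustration. The function name and data layout are assumptions, not the paper's API:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Size-weighted FedAvg aggregation of per-layer parameter arrays.

    client_weights: one list of np.ndarray layers per federated agent.
    client_sizes:   number of local training samples per agent.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]
```

Each slice-side agent would train its anomaly detector on local telemetry and only share parameters, never raw records, which is what makes the approach non-intrusive.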
Citations: 0
Guest Editorial for the Special Issue on Federated Learning on the Edge: Challenges and Future Directions
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-10-01. DOI: 10.1016/j.future.2024.107546. Volume 163, Article 107546.
Francesco Piccialli , Antonella Guzzo , David Camacho
Citations: 0
PPAT: An effective scheme ensuring privacy-preserving, accuracy, and trust for worker selection in mobile crowdsensing networks
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-27. DOI: 10.1016/j.future.2024.107536. Volume 163, Article 107536.
Qianxue Guo , Yasha He , Qian Li , Anfeng Liu , Neal N. Xiong , Qian He , Qiang Yang , Shaobo Zhang
Data content privacy protection and data accuracy are two important research issues in Mobile Crowdsensing (MCS). However, current research has rarely been able to satisfy both privacy protection and data accuracy at the same time, which hinders the development of MCS. To solve these issues, we propose, for the first time, a Privacy-Preserving, Accuracy, and Trust data collection scheme (PPAT) for MCS, which protects the privacy of data content while maintaining high accuracy at low cost. In the PPAT scheme, we first propose a scrambled-data privacy protection framework that prevents each worker's data from being known to any third party, thereby protecting workers' data privacy. Second, and more importantly, we propose a truth-value estimation method based on trust computing, which obtains the truth value more accurately than classic methods while preserving privacy. In the proposed trust-based truth-value calculation, a worker's trust is determined by comparison with the weights of trusted workers. The truth value is then computed from the workers' trust, making the obtained truth value more accurate. Theoretical analysis proves that the proposed PPAT scheme provides good privacy protection for worker data content, worker trust, and truth-value content. Extensive simulation experiments show that, compared to previous strategies, the proposed strategy protects data content privacy well while improving data quality by 0.5% to 5.7% and reducing data collection costs by 35.6% to 54.9%.
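PPAT's exact trust computation is not given in the abstract; as a generic illustration of trust-based truth-value estimation, the sketch below alternates between a trust-weighted mean and re-weighting workers inversely to their distance from the current estimate, in the spirit of classic truth-discovery methods. All function names are hypothetical, and the real scheme additionally operates on scrambled (privacy-protected) data:

```python
def estimate_truth(reports, trusts):
    """Trust-weighted mean of the workers' reported values."""
    return sum(r * t for r, t in zip(reports, trusts)) / sum(trusts)

def update_trusts(reports, truth, eps=1e-6):
    """Trust a worker more the closer its report is to the estimate."""
    return [1.0 / (abs(r - truth) + eps) for r in reports]

def iterative_truth(reports, rounds=10):
    """Alternate truth estimation and trust updates until roughly stable."""
    trusts = [1.0] * len(reports)
    for _ in range(rounds):
        truth = estimate_truth(reports, trusts)
        trusts = update_trusts(reports, truth)
    return truth
```

On reports such as [10.0, 10.1, 9.9, 30.0], the estimate converges near 10, progressively discounting the outlying worker.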
Citations: 0
A priority-aware dynamic scheduling algorithm for ensuring data freshness in 5G networks
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-26. DOI: 10.1016/j.future.2024.107542. Volume 163, Article 107542.
Beom-Su Kim
To ensure the freshness of information in wireless communication systems, a new performance metric named the age of information (AoI) is being adopted in the design of transmission schedulers. However, most AoI schedulers rely on iterative optimization methods, which struggle to adapt to real-time changes, particularly in real-world 5G deployment scenarios, where network conditions are highly dynamic. In addition, they neglect the impact of consecutive AoI deadline violations, which result in prolonged information deficits. To address these limitations, we present a 5G scheduler that can cope with dynamic network conditions, with the aim of minimizing the long-term average AoI under deadline constraints. Specifically, we consider a dense urban massive machine-type communication (mMTC) scenario in which numerous Internet of Things (IoT) devices frequently join or leave the network under time-varying channel conditions. To facilitate real-time adaptation, we develop a per-slot scheduling method that makes locally optimal decisions for each slot without requiring extensive iterations. In addition, we combine the per-slot scheduling method with a priority-rule scheduling algorithm to satisfy the stringent timing requirements of 5G. The simulation results show that, compared with other AoI schedulers, the proposed scheduler reduces the average AoI by approximately 10%, the deadline violation rate by approximately 40%, and the consecutive violation rate by approximately 20%.
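The paper's per-slot rule is not reproduced in the abstract; a minimal illustration of deadline-aware, locally optimal per-slot selection is sketched below. It serves the device with the least slack before its AoI deadline, breaking ties by the largest AoI; all field names are assumptions:

```python
def schedule_slot(devices, now):
    """Locally optimal per-slot choice: serve the device with the least
    slack before its AoI deadline, breaking ties by the largest AoI."""
    def urgency(d):
        aoi = now - d["last_update"]   # current age of information
        slack = d["deadline"] - aoi    # time left before a deadline violation
        return (slack, -aoi)
    return min(devices, key=urgency)
```

A rule like this runs in O(n) per slot, which is what makes per-slot decisions feasible where iterative optimization is not.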
Citations: 0
Connected vehicles ecological driving based on deep reinforce learning: Application of Web 3.0 technologies in traffic optimization
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-26. DOI: 10.1016/j.future.2024.107544. Volume 163, Article 107544.
Minghui Ma , Xu Han , Shidong Liang , Yansong Wang , Lan Jiang
With the fast development of Web 3.0 technology, connected vehicles can now handle and communicate data more safely and effectively. When combined with 5G/6G communication technology, these vehicles can optimize emissions in transportation networks to a greater extent. Against this backdrop, this study proposes an ecological car-following model learned from naturalistic driving data using deep reinforcement learning. First, an environment for connected-vehicle car-following is created from the naturalistic driving data. Second, this paper uses the SAC (Soft Actor-Critic) deep reinforcement learning algorithm and designs a novel reward function, based on ecological driving principles and car-following characteristics, to reduce fuel consumption and emissions while maintaining a safe distance from the leading vehicle. The established model is then tested, and the results indicate that it not only performs well on the test set in terms of collision occurrences, Time-to-Collision (TTC), and driving comfort, but also achieves a 5.50% reduction in fuel consumption and reductions of 15.04%, 5.63%, and 9.60% in pollutant emissions (NOx, CO, and HC, respectively) compared to naturalistic manually driven vehicles.
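The actual reward function is not given in the abstract; the toy reward below only illustrates how ecological and safety terms might be combined for an SAC agent. The fuel proxy, headway threshold, and weights are all illustrative assumptions, not the authors' design:

```python
def eco_reward(speed, accel, gap,
               t_headway_min=1.5, w_fuel=1.0, w_safe=5.0, w_comfort=0.1):
    """Toy ecological car-following reward (all weights illustrative).

    Penalizes a crude fuel-rate proxy (|a| * v), a time headway below
    t_headway_min seconds, and harsh accelerations (comfort term).
    """
    fuel_proxy = abs(accel) * speed
    headway = gap / max(speed, 0.1)            # seconds to the leader
    unsafe = max(0.0, t_headway_min - headway)
    return -(w_fuel * fuel_proxy + w_safe * unsafe + w_comfort * accel ** 2)
```

Under such a shaping, steady driving at a safe headway scores higher than harsh acceleration at a short gap, which is the trade-off the abstract describes.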
Citations: 0
MITgcm-AD v2: Open source tangent linear and adjoint modeling framework for the oceans and atmosphere enabled by the Automatic Differentiation tool Tapenade
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-23. DOI: 10.1016/j.future.2024.107512. Volume 163, Article 107512.
Shreyas Sunil Gaikwad , Sri Hari Krishna Narayanan , Laurent Hascoët , Jean-Michel Campin , Helen Pillar , An Nguyen , Jan Hückelheim , Paul Hovland , Patrick Heimbach
The Massachusetts Institute of Technology General Circulation Model (MITgcm) is widely used by the climate science community to simulate planetary atmosphere and ocean circulations. A defining feature of the MITgcm is that it has been developed to be compatible with an algorithmic differentiation (AD) tool, TAF, enabling the generation of tangent-linear and adjoint models. These provide gradient information which enables dynamics-based sensitivity and attribution studies, state and parameter estimation, and rigorous uncertainty quantification. Importantly, gradient information is essential for computing comprehensive sensitivities and performing efficient large-scale data assimilation, ensuring that observations collected from satellites and in-situ measuring instruments can be effectively used to optimize a large uncertain control space. As a result, the MITgcm forms the dynamical core of a key data assimilation product employed by the physical oceanography research community: Estimating the Circulation and Climate of the Ocean (ECCO) state estimate. Although MITgcm and ECCO are used extensively within the research community, the AD tool TAF is proprietary and hence inaccessible to a large proportion of these users.
The new version 2 (MITgcm-AD v2) framework introduced here is based on the source-to-source AD tool Tapenade, which has recently been open-sourced. Another feature of Tapenade is that it stores required variables by default (instead of recomputing them) which simplifies the implementation of efficient, AD-compatible code. The framework has been integrated with the MITgcm model’s main branch and is now freely available.
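The adjoint (reverse-mode) principle that Tapenade applies source-to-source to MITgcm's Fortran can be illustrated with a minimal operator-overloading sketch in Python: the forward pass records local partial derivatives, and a backward pass accumulates the output's sensitivity with respect to each input. This is an illustration of the AD principle only, not how Tapenade itself works internally:

```python
import math

class Var:
    """Minimal reverse-mode AD node: the forward pass records local
    partial derivatives; backward() propagates adjoints to the inputs."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent Var, local partial derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)
```

For f(x, y) = x*y + sin(x), one backward pass recovers both df/dx = y + cos(x) and df/dy = x at machine precision, which is the property that makes adjoint models efficient for the high-dimensional control spaces mentioned above.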
Citations: 0
Self-aware collaborative edge inference with embedded devices for IIoT
IF 6.2, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, THEORY & METHODS. Pub Date: 2024-09-23. DOI: 10.1016/j.future.2024.107535. Volume 163, Article 107535.
Yifan Chen , Zhuoquan Yu , Yi Jin , Christine Mwase , Xin Hu , Li Da Xu , Zhuo Zou , Lirong Zheng
Edge inference and other compute-intensive industrial Internet of Things (IIoT) applications suffer from poor quality of experience due to the limited and heterogeneous computing and communication resources of embedded devices. To tackle these issues, we propose a model-partitioning-based, self-aware collaborative edge inference framework. Specifically, a device can adaptively adjust its local model inference scheme by sensing the available computing and communication resources of surrounding devices. When the inference latency requirement cannot be met by local computation, the model is partitioned for collaborative computation on other devices to improve inference efficiency. Furthermore, for two typical IIoT scenarios, i.e., bursting and stacking tasks, latency-aware and throughput-aware collaborative inference algorithms are designed, respectively. By jointly optimizing the partition layer and collaborative device selection, the optimal inference efficiency, characterized by minimum inference latency and maximum inference throughput, can be obtained. Finally, the performance of our proposal is validated through extensive simulations and tests conducted on 10 Raspberry Pi 4Bs using popular models. Specifically, in the case of two collaborative devices, our platform achieves up to a 92.59% latency reduction for bursting tasks and 16.19× throughput growth for stacking tasks. In addition, the divergence between simulations and tests ranges from 1.64% to 9.56% for bursting tasks and from 3.24% to 11.24% for stacking tasks, which indicates that the theoretical performance analyses are solid. For the general case, where data privacy is not considered and the number of collaborative devices is optimally determined, up to a 14.76× throughput speedup and an 84.04% latency reduction can be obtained.
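The partition-layer optimization described above can be illustrated with a simple latency model: run a prefix of layers locally, ship the split-point activation over the network, and run the suffix on the collaborating device. The sketch below exhaustively scores every split; the parameter names and the linear cost model are assumptions, not the paper's formulation:

```python
def best_partition(layer_flops, layer_out_bytes, input_bytes,
                   local_speed, remote_speed, bandwidth):
    """Pick the split index k minimizing end-to-end latency: layers [0, k)
    run locally, the activation at the split is transmitted, and layers
    [k, n) run remotely. k = 0 is fully offloaded; k = n is fully local."""
    n = len(layer_flops)
    best_time, best_k = float("inf"), 0
    for k in range(n + 1):
        t_local = sum(layer_flops[:k]) / local_speed
        t_remote = sum(layer_flops[k:]) / remote_speed
        sent = input_bytes if k == 0 else layer_out_bytes[k - 1]
        t_tx = 0.0 if k == n else sent / bandwidth   # fully local sends nothing
        total = t_local + t_tx + t_remote
        if total < best_time:
            best_time, best_k = total, k
    return best_k
```

With per-layer costs profiled at runtime, re-running this search as sensed bandwidth and device speeds change gives the self-aware adaptation the framework describes.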
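The core optimization the abstract describes — choosing the partition layer that minimizes end-to-end latency across a local device and a collaborator — can be sketched as follows. This is an illustrative cost model only: the function name, the per-layer cost inputs, and the assumption that the first layer stays local (so raw data never leaves the device) are ours, not the paper's actual algorithm.

```python
def best_partition(local_ms, remote_ms, out_kb, bandwidth_kbps):
    """Pick the layer after which to split a model between a local
    device and one collaborator, minimizing end-to-end latency.

    local_ms[i]  - latency of layer i on the local device (ms)
    remote_ms[i] - latency of layer i on the collaborator (ms)
    out_kb[i]    - size of layer i's output activation (KB)
    """
    n = len(local_ms)
    best = (float("inf"), None)
    # k = number of layers executed locally; k >= 1 keeps raw input local.
    for k in range(1, n + 1):
        # Activation transfer is only needed when the collaborator
        # still has layers left to run (k < n).
        transfer = out_kb[k - 1] / bandwidth_kbps * 1000 if k < n else 0
        total = sum(local_ms[:k]) + transfer + sum(remote_ms[k:])
        best = min(best, (total, k))
    return best  # (latency_ms, partition_layer)
```

With a slow link (100 kbps) and a fast collaborator, the transfer cost dominates and the sketch correctly falls back to fully local execution; with a faster link, an intermediate split wins.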
Self-aware collaborative edge inference with embedded devices for IIoT. Future Generation Computer Systems, vol. 163, Article 107535.
JuMonC: A RESTful tool for enabling monitoring and control of simulations at scale
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-09-23 DOI : 10.1016/j.future.2024.107541
Christian Witzler , Filipe Souza Mendes Guimarães , Daniel Mira , Hartwig Anzt , Jens Henrik Göbbert , Wolfgang Frings , Mathis Bode
As systems and simulations grow in size and complexity, it is challenging to maintain efficient use of resources and avoid failures. In this scenario, monitoring becomes even more important and, indeed, mandatory. This paper describes and discusses the benefits of the advanced monitoring and control tool JuMonC, which runs under user control alongside HPC simulations and provides valuable metrics via a REST API. In addition, plugin extensibility allows JuMonC to go a step further and provide computational steering of the simulation itself. To demonstrate the benefits and usability of JuMonC for large-scale simulations, two use cases are described employing nekRS and ICON on JURECA-DC, a supercomputer at the Jülich Supercomputing Centre (JSC). Furthermore, a large-scale use case with nekRS on JSC's flagship system JUWELS Booster is described. Finally, the interplay between JuMonC and LLview (a standard monitoring tool for HPC systems) is presented using a simple and secure JuMonC-LLview plugin, which collects performance metrics and enables their analysis in LLview. Overall, the portability and usefulness of JuMonC, together with its low performance impact, make it an important application for both current and future generations of exascale HPC systems.
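The pattern the abstract describes — a monitoring service running alongside a job and a client polling its metrics over REST — can be sketched in a few lines of standard-library Python. The endpoint path `/v1/metrics` and the JSON payload below are assumptions for demonstration; they are not JuMonC's actual API, which is documented with the tool itself.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Toy stand-in for a REST-style monitoring endpoint."""

    def do_GET(self):
        if self.path == "/v1/metrics":
            body = json.dumps({"cpu_load": 0.42, "mem_used_gb": 3.1}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *_):
        pass  # silence per-request logging

def poll_metrics(url):
    """Fetch and decode one metrics snapshot from a REST endpoint."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

# Run the toy server on an ephemeral port and poll it once.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
metrics = poll_metrics(f"http://127.0.0.1:{server.server_port}/v1/metrics")
server.shutdown()
```

In a real deployment the poller would run on a login node or in a dashboard such as LLview, querying the service at a fixed interval while the simulation runs.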
Future Generation Computer Systems, vol. 164, Article 107541.
Popularity-based multiple-replica cloud storage integrity auditing for big data
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-09-21 DOI : 10.1016/j.future.2024.107534
Guoqing Chen , Rong Hao , Ming Yang
Multiple-replica cloud storage stores multiple replicas of the user's data on different cloud servers, which remarkably enhances data availability. To ensure that data replicas are correctly stored on cloud servers, multiple-replica cloud storage integrity auditing has been proposed. Nevertheless, storing multiple replicas in the cloud is not always necessary for all data, and doing so reduces storage efficiency for big data. In this paper, we consider a new problem: how to make a tradeoff between data availability and storage efficiency in cloud storage integrity auditing. When a file changes from an unpopular file to a popular one, it is no longer viewed as important. If we continue to store multiple replicas of such a file for availability, a great deal of storage is wasted in big data scenarios. Therefore, the tradeoff between data availability and storage efficiency is a significant issue. We propose a novel scheme called the popularity-based multiple-replica cloud storage integrity auditing scheme. We introduce popularity into multiple-replica cloud storage integrity auditing to intelligently capture data importance. For unpopular cloud data (important data), we adopt the multiple-replica cloud storage technique. In contrast, we store only a single replica for popular cloud data (unimportant data). Our proposed scheme can smoothly perform the auditing task for both unpopular and popular cloud data. As a result, it strikes a good balance between data availability and storage efficiency in cloud storage integrity auditing for big data. Furthermore, we discuss how to support possible changes in data popularity after dynamic operations. We prove the security of the proposed scheme and analyze its performance.
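The replica-count decision the abstract describes — several replicas for unpopular (important) data, a single replica once a file becomes popular — can be sketched as a small policy class. The threshold value, default replica count, and class name below are illustrative assumptions, not the paper's construction (which also covers the integrity-auditing protocol itself).

```python
class ReplicaManager:
    """Toy popularity-based replica policy: cold files keep several
    replicas; files whose access count crosses a popularity threshold
    are trimmed to a single replica."""

    def __init__(self, popularity_threshold=100, cold_replicas=3):
        self.threshold = popularity_threshold
        self.cold_replicas = cold_replicas
        self.access_counts = {}

    def record_access(self, file_id):
        self.access_counts[file_id] = self.access_counts.get(file_id, 0) + 1

    def target_replicas(self, file_id):
        # Popular (unimportant) data: one replica suffices.
        if self.access_counts.get(file_id, 0) >= self.threshold:
            return 1
        # Unpopular (important) data: keep the full replica set.
        return self.cold_replicas
```

A dynamic operation that changes a file's popularity (the case the paper discusses) would simply re-evaluate `target_replicas` and add or drop replicas accordingly.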
Future Generation Computer Systems, vol. 163, Article 107534.