
Performance Evaluation: Latest Articles

Network-calculus service curves of the interleaved regulator
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-09-02 · DOI: 10.1016/j.peva.2024.102443
Ludovic Thomas, Jean-Yves Le Boudec

The interleaved regulator (implemented by IEEE TSN Asynchronous Traffic Shaping) is used in time-sensitive networks for reshaping the flows with per-flow contracts. When applied to an aggregate of flows that come from a FIFO system, an interleaved regulator that reshapes the flows with their initial contracts does not increase the worst-case delay of the aggregate. This shaping-for-free property supports the computation of end-to-end latency bounds and the validation of the network’s timing requirements. A common method to establish the properties of a network element is to obtain a network-calculus service-curve model. The existence of such a model for the interleaved regulator remains an open question. If a service-curve model were found for the interleaved regulator, then the analysis of this mechanism would no longer be limited to the situations where the shaping-for-free property holds, which would widen its use in time-sensitive networks. In this paper, we investigate if network-calculus service curves can capture the behavior of the interleaved regulator. For an interleaved regulator that is placed outside of the shaping-for-free requirements (after a non-FIFO system), we develop Spring, an adversarial traffic-generation scheme that yields unbounded latencies. Consequently, we prove that no network-calculus service curve exists to explain the interleaved regulator’s behavior. It is still possible to find non-trivial service curves for the interleaved regulator. However, their long-term rate cannot be large enough to provide any guarantee. Specifically, we prove that for the regulators that process at least four flows with the same contract, the long-term rate of any service curve is upper bounded by three times the rate of the per-flow contract.
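The head-of-line coupling that makes the interleaved regulator hard to model can be illustrated with a minimal sketch (the leaky-bucket contract, function name, and parameters below are illustrative assumptions, not the paper's construction): packets from different flows share one FIFO, and the head packet blocks every later packet until its own flow's contract permits release.

```python
def interleaved_regulator(packets, rate, burst):
    """Release times of FIFO-ordered packets behind an interleaved regulator.

    `packets` is a list of (arrival_time, flow_id, size), in FIFO order.
    Every flow is shaped by the same leaky-bucket contract (rate, burst):
    a packet becomes eligible once its flow has accumulated `size` tokens.
    Head-of-line rule: a packet leaves only after all packets ahead of it
    in the shared FIFO have left.
    """
    tokens = {}          # flow_id -> (token level, time of last update)
    releases = []
    prev_release = 0.0
    for arrival, flow, size in packets:
        level, last = tokens.get(flow, (burst, 0.0))
        t = max(arrival, prev_release)               # FIFO + arrival constraint
        deficit = size - min(burst, level + (t - last) * rate)
        if deficit > 0:                              # wait for tokens to refill
            t += deficit / rate
        level = min(burst, level + (t - last) * rate) - size
        tokens[flow] = (level, t)
        releases.append(t)
        prev_release = t
    return releases
```

With rate 1, burst 1, and three unit-size packets (A, A, B) all arriving at time 0, the conformant B packet is released only at time 1: the second A packet blocks the head of the FIFO. This coupling is exactly what the shaping-for-free property keeps harmless after FIFO systems.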

Citations: 0
Security-reliability trade-off analysis for transmit antenna selection in cognitive ambient backscatter communications
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-08-26 · DOI: 10.1016/j.peva.2024.102441
Ahmed N. Elbattrawy, Ahmed H. Abd El-Malek, Sherif I. Rabia, Waheed K. Zahra

Massive deployment of IoT devices raises the need for energy-efficient, spectrum-efficient, low-cost communications. Ambient backscatter communication (AmBC) technology provides a promising solution to achieve that. Moreover, incorporating AmBC with cognitive radio networks (CRNs) achieves better spectrum efficiency; however, this comes with performance drawbacks. In this work, we investigate the security and reliability performance of an underlay CRN with AmBC, where the backscattering device (BD) exploits the radio frequency (RF) signals of the secondary transmitter (ST), and both the ST and the BD share a common receiver. Different from previous work, we consider an ST with multiple antennas. The ST employs a transmit antenna selection (TAS) scheme to enhance the ST performance and overcome the performance degradation caused by the BD interference. TAS exploits multiple-antenna diversity with lower hardware complexity and power consumption. Considering the Nakagami-m fading model, closed-form expressions are derived for the outage probability (OP) and intercept probability (IP) of both the ST and the BD transmissions at the legitimate receiver and the eavesdropper. Moreover, the asymptotic behavior of OPs and IPs is also investigated in the high signal-to-noise ratio regime and the high main-to-eavesdropper ratio regime, respectively. Monte Carlo simulations are performed to validate the derived closed-form expressions. Numerical results show that employing TAS enhances the ST and BD reliability performance by percentages up to 98% and 80%, respectively, at high primary user interference threshold values. Moreover, it results in a better security-reliability trade-off for the ST and the BD.
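The diversity mechanism behind TAS can be sanity-checked numerically. Below is a hedged Monte Carlo sketch of antenna selection over i.i.d. Nakagami-m fading; the function name, SNR values, and rate threshold are illustrative, and the sketch omits the BD interference that the paper's full system model includes.

```python
import numpy as np

def tas_outage_probability(n_antennas, m, snr_db, rate_threshold,
                           trials=200_000, seed=1):
    """Monte Carlo outage probability with transmit antenna selection
    over i.i.d. Nakagami-m fading (unit average channel power).

    Each antenna's power gain is Gamma(m, 1/m); TAS transmits from the
    antenna with the largest instantaneous gain, and an outage occurs
    when log2(1 + SNR * gain) falls below `rate_threshold`.
    """
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    gains = rng.gamma(shape=m, scale=1.0 / m, size=(trials, n_antennas))
    best = gains.max(axis=1)                 # antenna selection
    return float(np.mean(np.log2(1.0 + snr * best) < rate_threshold))
```

Raising the number of transmit antennas sharply reduces the outage probability in this setting, which is the diversity effect the reliability improvements above rest on.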

Citations: 0
An experimental study on beamforming architecture and full-duplex wireless across two operational outdoor massive MIMO networks
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-09-27 · DOI: 10.1016/j.peva.2024.102447
Hadi Hosseini, Ahmed Almutairi, Syed Muhammad Hashir, Ehsan Aryafar, Joseph Camp
Full-duplex (FD) wireless communication refers to a communication system in which both ends of a wireless link transmit and receive data simultaneously in the same frequency band. One of the major challenges of FD communication is self-interference (SI), which refers to the interference caused by the transmitting elements of a radio to its own receiving elements. Fully digital beamforming is a technique used to conduct beamforming and has recently been repurposed to also reduce SI. However, the cost of fully digital systems dramatically increases with the number of antennas, as each antenna requires an independent Tx-Rx RF chain. Hybrid beamforming systems use a much smaller number of RF chains to feed the same number of antennas, and hence can significantly reduce the deployment cost. In this paper, we aim to quantify the performance gap between these two radio architectures in terms of SI cancellation and system capacity in FD multi-user Multiple Input Multiple Output (MIMO) setups. We first obtained over-the-air channel measurement data on two outdoor massive MIMO deployments over the course of three months. We next study SoftNull and M-HBFD as two state-of-the-art transmit (Tx) beamforming-based FD systems, and introduce two new joint transmit-receive (Tx-Rx) beamforming-based FD systems named TR-FD² and TR-HBFD for fully digital and hybrid radio architectures, respectively. We show that the hybrid beamforming systems can achieve 80%–99% of the fully digital systems' capacity, depending on the number of users. Our results show that it is possible to get many of the benefits associated with fully digital massive MIMO systems with a hybrid beamforming architecture at a fraction of the cost.
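The core idea behind SoftNull-style transmit beamforming, as an illustration: sacrifice some transmit dimensions by steering the precoder into the weak directions of the self-interference channel. The sketch below uses an SVD-based projection; the random SI channel, dimensions, and names are assumptions, not the paper's measured channels or exact construction.

```python
import numpy as np

def softnull_style_precoder(H_si, n_sacrifice):
    """Span the transmit subspace least visible to the SI channel.

    H_si: (n_rx, n_tx) self-interference channel. The returned precoder
    spans the n_tx - n_sacrifice right-singular directions of H_si with
    the smallest singular values; sacrificing more directions suppresses
    more self-interference but leaves fewer dimensions for data.
    """
    _, _, Vh = np.linalg.svd(H_si)           # Vh: (n_tx, n_tx)
    return Vh[n_sacrifice:, :].conj().T      # (n_tx, n_tx - n_sacrifice)

# Illustrative random SI channel: 8 receive and 16 transmit antennas.
rng = np.random.default_rng(0)
n_rx, n_tx = 8, 16
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
P = softnull_style_precoder(H, n_sacrifice=8)
# Residual SI power after precoding, relative to the unprojected channel.
residual = np.linalg.norm(H @ P) ** 2 / np.linalg.norm(H) ** 2
```

Sacrificing eight directions here nulls the rank-8 SI channel completely; smaller values of `n_sacrifice` trade residual self-interference for transmit degrees of freedom, which is the tension the capacity comparison above quantifies.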
Citations: 0
Probabilistic performance evaluation of the class-A device in LoRaWAN protocol on the MAC layer
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-09-21 · DOI: 10.1016/j.peva.2024.102446
Mi Chen, Lynda Mokdad, Jalel Ben-Othman, Jean-Michel Fourneau
LoRaWAN is a network technology that provides a long-range wireless network while maintaining low energy consumption. It adopts the pure Aloha MAC protocol and duty-cycle limitations on both the uplink and the downlink at the MAC layer to conserve energy. Additionally, LoRaWAN employs orthogonal parameters to mitigate collisions. However, synchronization in star-of-stars topology networks and the complicated collision mechanism make it challenging to conduct a quantitative performance evaluation of LoRaWAN. Our previous work proposes a Probabilistic Timed Automata (PTA) model to represent the uplink transmission in LoRaWAN. PTA is a mathematical formalism that captures nondeterministic and probabilistic choices as time passes. However, that model remains a work in progress. This study extends the PTA model to depict Class-A devices in the LoRaWAN protocol. The complete characteristics of LoRaWAN’s MAC layer, such as duty-cycle limits, bidirectional communication, and confirmed message transmission, are accurately modeled. Furthermore, a comprehensive collision model is integrated into the PTA. Various properties are verified using the probabilistic model checker PRISM, and quantitative properties are calculated under diverse scenarios. This quantitative analysis provides valuable insights into the performance and behavior of LoRaWAN networks under varying conditions.
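As background for the access mechanism being modeled: under pure Aloha, a unit-airtime frame succeeds only if no interfering frame starts within its vulnerable window of length two, giving success probability e^(−2G) at Poisson offered load G. A small seeded Monte Carlo check of that classical fact (illustrative only; LoRaWAN layers duty cycles and orthogonal parameters on top of Aloha):

```python
import math
import random

def aloha_success_prob(G, trials=100_000, seed=7):
    """Monte Carlo estimate of the pure-Aloha success probability.

    With Poisson offered load G, the gap until the next interfering
    frame start is Exp(G); a tagged unit-airtime frame succeeds iff
    that gap exceeds the vulnerable window of length 2, so the true
    success probability is exp(-2 * G).
    """
    random.seed(seed)
    ok = sum(random.expovariate(G) > 2.0 for _ in range(trials))
    return ok / trials
```

At load G = 0.5, the estimate should sit near e^(−1) ≈ 0.368, the well-known peak-throughput regime of pure Aloha.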
Citations: 0
Performance evaluation of containers for low-latency packet processing in virtualized network environments
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-08-28 · DOI: 10.1016/j.peva.2024.102442
Florian Wiedner, Max Helm, Alexander Daichendt, Jonas Andre, Georg Carle

Packet processing in current network scenarios faces complex challenges due to the increasing prevalence of requirements such as low latency, high reliability, and resource sharing. Virtualization is a potential solution to mitigate these challenges by enabling resource sharing and on-demand provisioning; however, ensuring high reliability and ultra-low latency remains a key challenge. Since bare-metal systems are often impractical because of high cost and space usage, and the overhead of virtual machines (VMs) is substantial, we evaluate the utilization of containers as a potential lightweight solution for low-latency packet processing. Herein, we discuss the benefits and drawbacks and advocate container environments for low-latency packet processing when the achievable isolation of customer data is adequate and bare-metal systems are unaffordable. Our results demonstrate that containers exhibit latency similar to bare-metal packet processing, with more predictable tail-latency behavior. Moreover, deciding which mainboard architecture to use, especially the cache division, is equally vital, as containers are prone to higher latencies when more caches are shared between cores, especially when other optimizations cannot be used. We show that this has a higher impact on latencies within containers than on bare metal or VMs, making the selection of hardware architectures after optimization a critical challenge. Furthermore, the results reveal that the virtualization overhead does not impact tail latencies.
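Tail-focused evaluations of this kind report high percentiles rather than means, since a rare slow packet matters more than the average. A minimal helper in that spirit (the function, field names, and units are illustrative, not the study's tooling):

```python
import numpy as np

def tail_latency_report(samples_us):
    """Summarize one latency run by its median, p99, p99.9 and maximum
    (values in microseconds)."""
    s = np.asarray(list(samples_us), dtype=float)
    p50, p99, p999 = np.percentile(s, [50, 99, 99.9])
    return {"p50": float(p50), "p99": float(p99),
            "p99.9": float(p999), "max": float(s.max())}
```

Comparing such reports between a container run and a bare-metal run is exactly the "similar latency, more predictable tail" comparison described above.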

Citations: 0
Efficient handling of sporadic messages in FlexRay
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-09-06 · DOI: 10.1016/j.peva.2024.102444
Sunil Kumar P.R., Manjunath A.S., Vinod V.

FlexRay is a high-bandwidth protocol that supports hard-deadline periodic and sporadic traffic in modern in-vehicle communication networks. The dynamic segment of FlexRay is used for transmitting hard-deadline sporadic messages. In this paper, we describe an algorithm to minimize the duration of the dynamic segment in a FlexRay cycle, yielding better results than existing algorithms in the literature. The proposed algorithm consists of two phases. In the first phase, we assume that a sporadic message instance contends for service with only one instance of each higher-priority message. The lower bound provided by the first phase serves as the initial guess for the number of mini-slots used in the second phase, where an exact scheduling analysis is performed. In the second phase, a sporadic message may contend for service with multiple instances of each higher-priority message. This two-phase approach is efficient because the first phase has low overhead and its estimate greatly reduces the number of iterations needed in the second phase. We conducted experiments using the dataset provided in the literature as well as the SAE benchmark dataset. The experimental results demonstrate superior bandwidth minimization and computational efficiency compared to other algorithms.
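The seed-then-iterate structure of the two-phase approach can be sketched with a classical response-time-style recurrence. This illustrates only the structure (cheap lower bound seeding an exact fixed point); the function names, recurrence, and parameters are assumptions, not the paper's FlexRay minislot equations.

```python
import math

def phase1_bound(demand, higher):
    """Phase 1: assume the message contends with exactly one instance of
    each higher-priority message (a cheap lower bound on the demand)."""
    return demand + sum(higher)

def phase2_exact(demand, higher, periods, horizon):
    """Phase 2: fixed-point iteration in which higher-priority message j
    (demand higher[j], period periods[j]) may contribute several
    instances. Seeding with the phase-1 bound cuts the number of
    iterations needed to reach the fixed point."""
    r = phase1_bound(demand, higher)
    while True:
        r_next = demand + sum(math.ceil(r / p) * d
                              for d, p in zip(higher, periods))
        if r_next == r or r_next > horizon:
            return r_next
        r = r_next
```

Starting the iteration at the phase-1 value rather than at `demand` skips the early, cheap-to-predict iterations, which is the efficiency argument the abstract makes.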

Citations: 0
Optimal resource management for multi-access edge computing without using cross-layer communication
IF 1.0 · CAS Tier 4, Computer Science · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-11-01 · Epub Date: 2024-09-12 · DOI: 10.1016/j.peva.2024.102445
Ankita Koley, Chandramani Singh

We consider a Multi-access Edge Computing (MEC) system with a set of users, a base station (BS) with an attached MEC server, and a cloud server. The users can serve service requests locally or can offload them to the BS, which in turn can serve a subset of the offloaded requests at the MEC server and forward the remaining requests to the cloud server. The user devices and the MEC server can be dynamically configured to serve different classes of services. The service requests offloaded to the BS incur offloading costs, and those forwarded to the cloud incur additional costs; the costs could represent service charges or delays. Aggregate cost minimization subject to stability warrants optimal service scheduling and offloading at the users and the MEC server, at their application layers, and optimal uplink packet scheduling at the users’ MAC layers. Classical back-pressure (BP) based solutions entail cross-layer message exchange, and hence are not viable. We propose virtual-queue-based drift-plus-penalty algorithms that are throughput optimal and achieve the optimal delay arbitrarily closely, but do not require cross-layer communication. We first consider an MEC system without local computation, and subsequently extend our framework to incorporate local computation as well. We demonstrate that the proposed algorithms offer almost the same performance as BP-based algorithms. These algorithms contain tunable parameters that offer a trade-off between the queue lengths at the users and the BS and the offloading costs.
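The drift-plus-penalty principle behind such algorithms, in one deliberately toy form: each slot, weigh the penalty V times the offloading cost against the queue backlog, and offload only when queue pressure dominates. The single-user setting, unit costs, and decision rule below are illustrative assumptions, far simpler than the paper's multi-layer scheduler.

```python
def drift_plus_penalty_offload(arrivals, local_cap, offload_cost, V):
    """Toy drift-plus-penalty controller for one user (illustrative).

    Each slot: serve up to `local_cap` requests locally for free; for
    the remainder, offload (at `offload_cost` per request) only when the
    queue backlog Q outweighs the weighted penalty V * offload_cost.
    Larger V favours lower cost at the price of longer queues.
    """
    Q = 0
    total_cost = 0.0
    backlog = []
    for a in arrivals:
        Q += a
        rest = Q - min(Q, local_cap)     # what local service cannot absorb
        offload = rest if Q > V * offload_cost else 0
        total_cost += offload * offload_cost
        Q = rest - offload
        backlog.append(Q)
    return total_cost, backlog
```

Sweeping V traces out exactly the cost-versus-backlog trade-off that the tunable parameters in the abstract refer to.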

{"title":"Optimal resource management for multi-access edge computing without using cross-layer communication","authors":"Ankita Koley,&nbsp;Chandramani Singh","doi":"10.1016/j.peva.2024.102445","DOIUrl":"10.1016/j.peva.2024.102445","url":null,"abstract":"<div><p>We consider a Multi-access Edge Computing (MEC) system with a set of users, a base station (BS) with an attached MEC server, and a cloud server. The users can serve the service requests locally or can offload them to the BS which in turn can serve a subset of the offloaded requests at the MEC and can forward the requests to the cloud server. The user devices and the MEC server can be dynamically configured to serve different classes of services. The service requests offloaded to the BS incur offloading costs and those forwarded to the cloud incur additional costs; the costs could represent service charges or delays. Aggregate cost minimization subject to stability warrants optimal service scheduling and offloading at the users and the MEC server, at their application layers, and optimal uplink packet scheduling at the users’ MAC layers. Classical back-pressure (BP) based solutions entail cross-layer message exchange, and hence are not viable. We propose virtual queue-based drift-plus-penalty algorithms that are throughput optimal, and achieve the optimal delay arbitrarily closely but do not require cross-layer communication. We first consider an MEC system without local computation, and subsequently, extend our framework to incorporate local computation also. We demonstrate that the proposed algorithms offer almost the same performance as BP based algorithms. 
These algorithms contain tuneable parameters that offer a trade off between queue lengths at the users and the BS and the offloading costs.</p></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"166 ","pages":"Article 102445"},"PeriodicalIF":1.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
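The virtual-queue drift-plus-penalty idea in this abstract can be illustrated with a minimal single-user sketch. Everything below (the slot structure, the arrival and local-service probabilities, and the parameter V) is an illustrative assumption, not a value or algorithm taken from the paper:

```python
import random

def drift_plus_penalty(T=20000, arrival_p=0.5, local_p=0.4,
                       offload_cost=1.0, V=10.0, seed=1):
    """Toy drift-plus-penalty offloading rule for one user queue.

    Each slot a request arrives with probability arrival_p. The user can
    serve the head-of-line request locally (succeeds with probability
    local_p, zero cost) or offload it (always succeeds, costs
    offload_cost). The greedy rule offloads only when the backlog Q
    outweighs the scaled penalty V * offload_cost, so no cross-layer
    information is needed. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    Q = 0               # queue backlog (number of pending requests)
    total_cost = 0.0
    for _ in range(T):
        Q += rng.random() < arrival_p        # Bernoulli arrival
        if Q == 0:
            continue
        if Q >= V * offload_cost:            # drift term dominates penalty
            Q -= 1
            total_cost += offload_cost
        elif rng.random() < local_p:         # attempt local service
            Q -= 1
    return Q, total_cost / T
```

Increasing V lowers the time-average offloading cost at the price of a longer queue, which mirrors the tunable trade-off between queue lengths and offloading costs that the abstract describes.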
Citations: 0
Spatial queues with nearest neighbour shifts
IF 1.0 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-11-01 Epub Date : 2024-10-16 DOI: 10.1016/j.peva.2024.102448
B.R. Vinay Kumar, Lasse Leskelä
This work studies queues in a Euclidean space. Consider N servers that are distributed uniformly in [0,1]^d. Customers arrive at the servers according to independent stationary processes. Upon arrival, they probabilistically decide whether to join the queue they arrived at, or shift to one of the nearest neighbours. Such shifting strategies affect the load on the servers, and may cause some of the servers to become overloaded. We derive a law of large numbers and a central limit theorem for the fraction of overloaded servers in the system as the total number of servers N→∞. Additionally, in the one-dimensional case (d=1), we evaluate the expected fraction of overloaded servers for any finite N. Numerical experiments are provided to support our theoretical results. Typical applications of the results include electric vehicles queueing at charging stations, and queues in airports or supermarkets.
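A minimal Monte Carlo sketch of the d=1 setting makes the shifting mechanism concrete. The arrival rate, service rate, and shift probability below are illustrative assumptions, not values from the paper:

```python
import random

def overloaded_fraction(N=200, arrival_rate=1.0, service_rate=1.2,
                        shift_prob=0.3, seed=0):
    """Estimate the fraction of overloaded servers in the d = 1 model.

    N servers are placed uniformly on [0, 1]. Each server receives
    customers at rate arrival_rate; an arriving customer stays with
    probability 1 - shift_prob or moves to the server's nearest
    neighbour with probability shift_prob. A server is overloaded when
    its effective arrival rate exceeds service_rate. All parameter
    values are illustrative.
    """
    rng = random.Random(seed)
    pos = sorted(rng.random() for _ in range(N))
    load = [0.0] * N
    for i in range(N):
        # nearest neighbour of server i on the sorted line
        if i == 0:
            nn = 1
        elif i == N - 1:
            nn = N - 2
        else:
            nn = i - 1 if pos[i] - pos[i - 1] < pos[i + 1] - pos[i] else i + 1
        load[i] += (1 - shift_prob) * arrival_rate   # customers that stay
        load[nn] += shift_prob * arrival_rate        # customers that shift
    return sum(l > service_rate for l in load) / N
```

With shift_prob = 0 no server exceeds its service rate in this toy setting; with positive shift probability, servers that are the nearest neighbour of several others accumulate extra load and become overloaded, which is the effect the paper quantifies.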
Citations: 0
Analyzing the age of information in prioritized status update systems under an interruption-based hybrid discipline
IF 2.2 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-08-01 Epub Date : 2024-04-15 DOI: 10.1016/j.peva.2024.102415
Tamer E. Fahim, Sherif I. Rabia, Ahmed H. Abd El-Malek, Waheed K. Zahra

Motivated by real-life applications, special research interest has recently been directed towards prioritized status update systems, which prioritize the update streams according to their timeliness constraints. The preferential service treatment between priority classes is commonly based on the classical disciplines of preemption and non-preemption. However, both disciplines fail to satisfy all classes evenly. In our work, an interruption-based hybrid preemptive/non-preemptive discipline is proposed for a single-buffer system modeled as an M/M/1/2 priority queueing system. Each class being served (resp. buffered) can be preempted unless its recorded number of service preemptions reaches the predetermined in-service (resp. in-waiting) threshold. The thresholds between classes are the controlling parameters of the whole system's performance. Using the stochastic hybrid system approach, the age of information (AoI) performance metric is analyzed in terms of its statistical average along with higher-order moments, for a general number of priority classes. Closed-form results are also obtained for some special cases, giving analytical insights into AoI stability under heavy loading. The average AoI and its dispersion are numerically investigated for a three-class network. The significance of the proposed model lies in achieving a compromise between the satisfaction of all priority classes through a thorough adjustment of its threshold parameters. Two approaches are proposed for adjusting these parameters. The proposed hybrid discipline turns out to compensate for the limited buffer resource, achieving more promising performance with low design complexity and low cost. Moreover, the proposed scheme can operate over a wider span of the total offered load, through which overall network satisfaction can be optimized under legitimate constraints on the age-sensitive classes.
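For readers unfamiliar with the AoI metric, a short simulation of the classical single-source M/M/1 FCFS status-update baseline (not the paper's M/M/1/2 priority model, and with illustrative rates) shows how the sample-path average age is computed:

```python
import random

def average_aoi(lam=0.5, mu=1.0, num_updates=200000, seed=0):
    """Monte Carlo average age of information (AoI) for a single-source
    M/M/1 FCFS status-update queue -- a classical baseline, not the
    M/M/1/2 priority model analyzed in the paper. The age grows at unit
    rate and, on each delivery, drops to that update's system time."""
    rng = random.Random(seed)
    prev_arr = 0.0  # arrival time of the previously delivered update
    prev_dep = 0.0  # delivery time of the previously delivered update
    t_arr = 0.0     # arrival time of the current update
    area = 0.0      # integral of the age sample path
    t_dep = 0.0
    for _ in range(num_updates):
        t_arr += rng.expovariate(lam)                  # Poisson arrivals
        t_dep = max(t_arr, prev_dep) + rng.expovariate(mu)  # FCFS service
        # Between consecutive deliveries the age is t - prev_arr, so this
        # slice of area is a difference of two triangles:
        area += ((t_dep - prev_arr) ** 2 - (prev_dep - prev_arr) ** 2) / 2
        prev_arr, prev_dep = t_arr, t_dep
    return area / t_dep
```

At load ρ = λ/μ = 0.5 the simulated value can be checked against the known closed form for this baseline, (1/μ)(1 + 1/ρ + ρ²/(1−ρ)) = 3.5.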

Citations: 0
Inference latency prediction for CNNs on heterogeneous mobile devices and ML frameworks
IF 1.0 CAS Tier 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-08-01 Epub Date : 2024-07-14 DOI: 10.1016/j.peva.2024.102429
Zhuojin Li, Marco Paolieri, Leana Golubchik

Due to the proliferation of inference tasks on mobile devices, state-of-the-art neural architectures are typically designed using Neural Architecture Search (NAS) to achieve good tradeoffs between machine learning accuracy and inference latency. Since measuring the inference latency of a huge set of candidate architectures during NAS is not feasible, latency prediction is required; it is challenging on mobile devices because of hardware heterogeneity, optimizations applied by machine learning frameworks, and the diversity of neural architectures. Motivated by these challenges, we first quantitatively assess the characteristics of neural architectures (specifically, convolutional neural networks for image classification), ML frameworks, and mobile devices that have significant effects on inference latency. Based on this assessment, we propose an operation-wise framework which addresses these challenges by developing operation-wise latency predictors and achieves high accuracy in end-to-end latency predictions, as shown by our comprehensive evaluations on multiple mobile devices using multicore CPUs and GPUs. To illustrate that our approach does not require expensive data collection, we also show that accurate predictions can be achieved on real-world neural architectures using only small amounts of profiling data.
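The operation-wise idea (predict each operation's latency separately, then sum the predictions over the network) can be sketched as follows. The linear per-operation cost models, their coefficients, and the example network are hypothetical stand-ins for the learned predictors described in the abstract:

```python
def predict_latency(ops, per_op_model):
    """Operation-wise end-to-end prediction: estimate each operation's
    latency from its own features, then sum over the network."""
    return sum(per_op_model[op["type"]](op) for op in ops)

# Hypothetical per-operation cost models (latency in ms); the linear
# forms and coefficients are made-up stand-ins for learned predictors.
per_op_model = {
    "conv":  lambda op: 0.05 + 1.2e-9 * op["macs"],       # grows with MACs
    "dense": lambda op: 0.02 + 0.9e-9 * op["macs"],
    "pool":  lambda op: 0.01 + 2.0e-8 * op["out_elems"],  # output elements
}

# A small CNN described as a flat list of operations.
network = [
    {"type": "conv",  "macs": 90e6},
    {"type": "pool",  "out_elems": 200704},
    {"type": "conv",  "macs": 150e6},
    {"type": "dense", "macs": 4e6},
]

total_ms = predict_latency(network, per_op_model)
```

In practice the per-operation predictors would be fitted separately per device and ML framework, which is what lets the summed estimate track end-to-end latency across heterogeneous targets.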

Citations: 0