
Latest articles in Performance Evaluation

Near-optimal PCM wear leveling under adversarial attacks
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-11-05 | DOI: 10.1016/j.peva.2025.102522
Tomer Lange, Joseph (Seffi) Naor, Gala Yadgar
Phase-change memory (PCM) is a promising memory technology known for its speed, high density, and durability. However, each PCM cell can endure only a limited number of erase and subsequent write operations before failing, and the failure of a single cell can limit the lifespan of the entire device. This vulnerability makes PCM particularly susceptible to adversarial attacks that induce excessive writes to accelerate device failure. To counter this, wear-leveling techniques aim to distribute write operations evenly across PCM cells.
In this paper, we study the online PCM utilization problem, which seeks to maximize the number of write requests served before any cell reaches the erase limit. While extensively studied in the systems and architecture communities, this problem remains largely unexplored from a theoretical perspective. We bridge this gap by presenting a novel algorithm that leverages hardware feedback to optimize PCM utilization. We prove that our algorithm achieves near-optimal worst-case guarantees and outperforms state-of-the-art practical solutions both theoretically and empirically, providing an efficient approach to prolonging PCM lifespan.
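The utilization objective above can be illustrated with a minimal greedy sketch, not the authors' algorithm: each write goes to the least-worn cell still below the erase limit, and the device fails once no such cell remains. All names and parameters here are hypothetical.

```python
def wear_level_write(wear, erase_limit):
    """Direct the next write to the least-worn cell that is still below the
    erase limit; return its index, or None if the device has failed."""
    live = [i for i, w in enumerate(wear) if w < erase_limit]
    if not live:
        return None
    target = min(live, key=lambda i: wear[i])
    wear[target] += 1
    return target


def serve_writes(n_cells, erase_limit, n_requests):
    """Count how many write requests are served before every cell has
    reached the erase limit."""
    wear = [0] * n_cells
    served = 0
    for _ in range(n_requests):
        if wear_level_write(wear, erase_limit) is None:
            break
        served += 1
    return served
```

Under this toy policy a device with n cells and limit e serves exactly n·e writes regardless of the (possibly adversarial) request stream, which is the kind of worst-case guarantee the paper formalizes.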
Performance Evaluation, Volume 170, Article 102522.
Citations: 0
γ-CounterBoost: Optimizing response time tails using job type information only
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-11-06 | DOI: 10.1016/j.peva.2025.102514
Nils Charlet, Benny Van Houdt
In a recent paper, the γ-Boost scheduling policy was shown to minimize the tail of the response time distribution in a light-tailed M/G/1-queue. This policy schedules jobs by their boosted arrival time, defined as the arrival time of a job minus its boost, where the boost of a job depends on its exact job size. The γ-Boost policy can also be used when only partial job size information is available, such as the type of an incoming job. In such a case, the boost b_i of a job depends solely on its type i, and γ-Boost was shown to optimize the tail among all boost policies, where a boost policy is fully determined by the b_i values. In the partial information setting, γ-Boost relies on two types of information: job types and arrival times.
This paper focuses on the problem of minimizing the tail in a light-tailed M/G/1-queue in the partial job size information setting when the scheduler only makes use of the job types and does not exploit arrival times. Prior work showed that, in the case of 2 job types, the so-called Nudge-M policy minimizes the tail within a large class of scheduling policies. In this paper we introduce the γ-CounterBoost policy in the partial information setting with d ≥ 2 job types and prove that it minimizes the tail in an even broader class of scheduling policies called Contextual CounterBoost policies. The γ-CounterBoost policy reduces to the Nudge-M policy in the case of d = 2 job types.
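The boosted-arrival-time rule described above can be sketched as follows. This is an illustrative toy, not the paper's γ-CounterBoost construction: the boost values b_i are hypothetical inputs, not the optimized ones the paper derives.

```python
def next_job(waiting, boosts):
    """Serve the waiting job with the smallest boosted arrival time
    a - b_type (ties broken by plain arrival time).  `waiting` holds
    (arrival_time, job_type) pairs; `boosts` maps type -> boost value."""
    return min(waiting, key=lambda job: (job[0] - boosts[job[1]], job[0]))
```

With zero boosts the rule degenerates to First-Come-First-Served; a larger b_i effectively gives type-i jobs priority proportional to their boost.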
Performance Evaluation, Volume 170, Article 102514.
Citations: 0
Response time in a pair of processor sharing queues with Join-the-Shortest-Queue scheduling
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-09-02 | DOI: 10.1016/j.peva.2025.102509
Julianna Bor, Peter G. Harrison
Join-the-Shortest-Queue (JSQ) is the scheduling policy of choice for many network providers, cloud servers, and traffic management systems, where individual queues are served under the processor sharing (PS) queueing discipline. A numerical solution for the response time distribution in two parallel PS queues with JSQ scheduling is derived for the first time. Using the generating function method, two partial differential equations (PDEs) are obtained corresponding to conditional response times, where the conditioning is on a particular traced task joining the first or the second queue. These PDEs are functional equations that contain partial generating functions and their partial derivatives, and therefore cannot be solved by commonly used techniques. We are able to solve these PDEs numerically with good accuracy and perform the deconditioning with respect to the queue-length probabilities by evaluating a certain complex integral. Numerical results for the density and the first four moments compare well against regenerative simulation.
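The two ingredients of the model, JSQ routing and processor-sharing service, can be sketched as follows. This is an illustrative toy, not the paper's generating-function solution; the tie-breaking convention is an assumption.

```python
def jsq_route(queue1, queue2):
    """Join-the-Shortest-Queue: route an arriving task to the queue with
    fewer tasks (ties go to queue 1); returns 1 or 2."""
    return 1 if len(queue1) <= len(queue2) else 2


def ps_rates(queue, capacity=1.0):
    """Processor sharing: each of the n tasks in a queue is served
    simultaneously at rate capacity / n."""
    n = len(queue)
    return [capacity / n] * n if n else []
```

Note that under PS a traced task's response time depends on the whole future arrival process at its queue, which is why conditioning on the joined queue leads to the functional PDEs the paper solves numerically.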
Performance Evaluation, Volume 170, Article 102509.
Citations: 0
TiFSN: A wavelet-EC-TCN model for quadrotor UAV trajectory prediction based on time–frequency–spatial feature fusion
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-11-03 | DOI: 10.1016/j.peva.2025.102515
Huan Zhao , Yong Kou , Yuxin Xue , Shuang Wang , Zhaojun Gu
Flight trajectory prediction (FTP) with high precision is the core technology for the autonomous flight of quadrotor unmanned aerial vehicles (UAVs) in environments with limited navigation signals. In response to the problem that most existing methods focus on the features of a single domain and ignore the cross-domain feature correlation, making it challenging to maintain high accuracy in FTP, a prediction model based on time–frequency–spatial feature fusion named TiFSN is proposed. Firstly, based on wavelet transform technology, the velocity signal is extended to time–frequency joint features. Furthermore, a fusion mechanism between time–frequency domain features and attitude angles is established, so that a multi-domain feature set with time–frequency–spatial perception can be constructed. Finally, an extended channels-based temporal convolutional network (EC-TCN) is designed, which achieves high-precision FTP by expanding the feature receptive field. Experiments were conducted on real flight datasets, and the results show that the model significantly improved the evaluation metrics compared to baseline methods. The generalization test of various complex FTP tasks using the onboard CPU also verified the excellent performance of the TiFSN. The ablation experiment further revealed the influence of wavelet decomposition depth and the strategy of expanded channels on the performance.
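One level of the wavelet decomposition used to extend the velocity signal into time–frequency features can be sketched with a plain Haar transform. The paper does not fix the wavelet family or depth here, so the Haar choice is an illustrative assumption.

```python
import math

def haar_dwt(signal):
    """One level of the Haar wavelet transform of an even-length signal:
    returns (approximation, detail) = (low-frequency, high-frequency)
    coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

Applying the transform recursively to the approximation coefficients deepens the decomposition, which is exactly the depth parameter whose influence the ablation study examines.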
Performance Evaluation, Volume 170, Article 102515.
Citations: 0
swPredictor: A data-driven performance model for distributed data parallelism training on large-scale HPC clusters
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-11-11 | DOI: 10.1016/j.peva.2025.102530
Xianyu Zhu , Ruohan Wu , Junshi Chen , Hong An
Given the complexity of heterogeneous architectures and multi-node collaboration, large-scale HPC (High-Performance Computing) clusters pose challenges in resource utilization and performance optimization during distributed data parallelism (DDP) training. Performance modeling aims to identify application bottlenecks and guide algorithm design, but existing performance models rarely consider the impact of system architecture on communication performance or provide a systematic analysis of distributed training. To address these issues, this paper proposes swPredictor, a data-driven performance model devised for accurately predicting the performance of DDP training. First, an original performance dataset is developed based on various communication patterns at runtime to avoid systematic errors. Subsequently, a novel multi-branch module, FNO-Inception, is proposed, combining an FNO (Fourier Neural Operator) layer with an Inception structure to simultaneously exploit features at multiple frequencies. Finally, by introducing the FNO-Inception module, a novel regression model, FI-Net, is constructed to fit complex nonlinear relationships. The experimental results demonstrate that FI-Net can accurately predict the performance of DDP training on the Sunway OceanLight supercomputer with an overall MAPE of 0.93%, which outperforms the other baseline models.
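The reported accuracy figure is a MAPE, which can be computed as below. This is the standard definition, assumed (not confirmed by the abstract) to match the paper's usage.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent, averaged over the
    samples whose actual value is nonzero."""
    terms = [abs(a - p) / abs(a) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(terms) / len(terms)
```

A MAPE of 0.93% thus means predictions deviate from measured training performance by under one percent on average.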
Performance Evaluation, Volume 170, Article 102530.
Citations: 0
Mitigating massive access with Quasi-Deterministic Transmission: Experiments and stationary analysis
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-11-03 | DOI: 10.1016/j.peva.2025.102512
Jacob Bergquist , Erol Gelenbe , Mohammed Nasereddin , Karl Sigman
The Massive Access Problem arises when devices forward packets to servers simultaneously in rapid succession, or when malevolent software in devices floods network nodes with high-intensity traffic. To protect servers from such events, attack detection (AD) software is installed on servers, and the Quasi-Deterministic Transmission Policy (QDTP) has been proposed to “shape traffic” and protect servers, allowing attack detection to proceed in a timely fashion by individually delaying some of the incoming packets based on their arrival times. QDTP does not cause packet loss, and can be designed so that it does not increase end-to-end packet delay. Starting with measurements taken on an experimental test-bed where the QDTP algorithm is installed on a dedicated processor that precedes the server itself, we show that QDTP protects the server from attacks by accumulating arriving packets at the input of the QDTP processor, then forwarding them at regular intervals to the server. We compare the behaviour of the server with and without QDTP, showing the improvement it achieves, provided that its “delay” parameter is correctly selected. We analyze the sample paths associated with QDTP and prove that when its delay parameter is chosen in a specific manner, the end-to-end delay of each packet remains unchanged compared to an ordinary First-In-First-Out system. An approach based on stationary ergodic processes is developed for the stability conditions.
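The shaping rule described above, accumulating arrivals and forwarding them at regular intervals, can be sketched as the release-time recursion r_n = max(a_n, r_{n-1} + D). This is an illustrative reading of QDTP with D standing for the policy's delay parameter, not the paper's exact formulation.

```python
def qdtp_release_times(arrivals, delay):
    """Shape a sorted list of packet arrival times so that consecutive
    forwardings to the server are spaced at least `delay` apart:
    r_n = max(a_n, r_{n-1} + delay).  No packet is ever dropped."""
    releases = []
    prev = float("-inf")
    for a in arrivals:
        r = max(a, prev + delay)
        releases.append(r)
        prev = r
    return releases
```

A burst of simultaneous arrivals is thus spread out at a deterministic pace of one packet every `delay` time units, while a sufficiently sparse stream passes through undelayed.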
Performance Evaluation, Volume 170, Article 102512.
Citations: 0
CommonSense: Efficient Set Intersection (SetX) protocol based on compressed sensing
IF 0.8 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-01 | Epub Date: 2025-11-04 | DOI: 10.1016/j.peva.2025.102520
Jingfan Meng, Tianji Yang, Jun Xu
Set reconciliation (SetR) is an important research problem that has been studied for over two decades. In this problem, two large sets A and B of objects (tokens, files, records, etc.) are stored respectively at two different network-connected hosts, which we name Alice and Bob respectively. Alice and Bob need to communicate with each other to learn the set union A ∪ B (which then becomes their reconciled state), at low communication and computation costs. In this work, we study a different problem intricately related to SetR: Alice and Bob collaboratively compute A ∩ B. We call this problem SetX (set intersection). Although SetX is just as important as SetR, it has never been properly studied in its own right. Rather, there is an unspoken perception in the research community that SetR and SetX are equally difficult (in costs), and hence “roughly equivalent.” Our first contribution is to show that SetX is fundamentally a much “cheaper” problem than SetR, debunking this long-standing perception. Our second contribution is to develop a novel SetX solution whose communication cost handily beats the information-theoretic lower bound of SetR. This protocol is based on the idea of compressed sensing (CS), which we describe here only for the special case A ⊆ B (we do have a more sophisticated protocol for the general case). Our protocol is for Alice to encode A into a CS sketch M·1_A and send it to Bob, where M is a CS matrix with l rows and 1_A is the binary vector representation of A. Our key innovation here is to make l (the sketch size) just large enough for the sketch to summarize B ∖ A (what Alice misses). In contrast, in existing protocols l needs to be large enough to summarize A (what Alice knows), which typically has much larger cardinality. Our third contribution is to design a CS matrix M that is both “friendly” to (the performance of) applications and “compliant” with CS theory.
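The linear-sketch idea can be illustrated with a toy Rademacher sensing matrix; the paper designs a more refined M, and all sizes and names here are hypothetical. Because the sketch is linear in the indicator vector, Bob can subtract Alice's sketch M·1_A from his own M·1_B to obtain the sketch of B ∖ A whenever A ⊆ B, and then run a CS decoder on it (the decoder is beyond this sketch).

```python
import random

def cs_matrix(n_rows, n_cols, seed=0):
    """A toy +/-1 (Rademacher) sensing matrix with n_rows rows."""
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(n_cols)] for _ in range(n_rows)]


def sketch(M, items, universe_size):
    """Encode a set as M @ 1_A, where 1_A is its binary indicator vector
    over the universe {0, ..., universe_size - 1}."""
    ind = [1 if i in items else 0 for i in range(universe_size)]
    return [sum(row[i] * ind[i] for i in range(universe_size)) for row in M]
```

The sketch length depends only on the number of rows of M, which is why it suffices for it to scale with |B ∖ A| rather than with |A|.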
Performance Evaluation, Volume 170, Article 102520.
Citations: 0
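The linear-sketch idea behind this protocol can be illustrated in a few lines. The sketch below is a toy reconstruction, not the authors' protocol: it uses a dense Gaussian matrix and a simple orthogonal matching pursuit (OMP) decoder, whereas the paper designs a special-purpose CS matrix, and it assumes Bob knows the size of the difference. It covers only the special case A ⊆ B: Bob subtracts Alice's sketch from his own, which by linearity leaves a sketch of the sparse difference B∖A, and decodes it over the candidate columns he already knows (his own elements). Note that the sketch length scales with |B∖A|, not with |A| — the paper's key point.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy instance of the special case A ⊆ B: Alice misses 5 of Bob's elements.
universe = 1000
B = set(map(int, rng.choice(universe, 200, replace=False)))
missing = set(sorted(B)[:5])          # B \ A, the part Alice does not have
A = B - missing

def indicator(s, n):
    v = np.zeros(n)
    v[sorted(s)] = 1.0
    return v

# Sketch length scales with |B \ A| (here assumed known), not with |A|.
k = len(missing)
l = 16 * k                            # generous oversampling for the toy decoder
M = rng.standard_normal((l, universe))

sketch_A = M @ indicator(A, universe)         # Alice -> Bob: l numbers
y = M @ indicator(B, universe) - sketch_A     # = M @ 1_{B\A} by linearity

# Bob decodes the k-sparse difference, restricting to columns of his own set B.
cols = sorted(B)
Phi = M[:, cols]
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching column
    support.append(j)
    x, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ x
recovered = {cols[j] for j in support}
assert recovered == missing
```

With Gaussian measurements and generous oversampling, OMP recovers the sparse difference with overwhelming probability; the paper's contribution is achieving this with a matrix that is also cheap for applications to encode and decode.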
Can attacks reduce Age of Information?
IF 1 CAS Zone 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-09-01 Epub Date: 2025-06-02 DOI: 10.1016/j.peva.2025.102498
Josu Doncel , Mohamad Assaad
We study a monitoring system in which a single source sends status updates to a monitor through a communication channel. The communication channel is modeled as a queueing system, and we assume that attacks occur following a random process. When an attack occurs, all packets in the queueing system are discarded. While one might expect attacks to always negatively impact system performance, we demonstrate in this paper that, from the perspective of Age of Information (AoI), attacks can in some cases reduce the AoI. Our objective is to identify the conditions under which AoI is reduced and to determine the attack rate that minimizes or reduces AoI. First, we analyze single and tandem M/M/1/1 queues with preemption and show that attacks cannot reduce AoI in these cases. Next, we examine a single M/M/1/1 queue without preemption and establish necessary and sufficient conditions for the existence of an attack rate that minimizes AoI. For this scenario, we also derive an upper bound for the optimal attack rate and prove that it becomes tight when the arrival rate of updates is very high. Through numerical experiments, we observe that attacks can reduce AoI in tandem M/M/1/1 queues without preemption, as well as in preemptive M/M/1/2 and M/M/1/3 queues. Furthermore, we show that the benefit of attacks on AoI increases with the buffer size.
{"title":"Can attacks reduce Age of Information?","authors":"Josu Doncel ,&nbsp;Mohamad Assaad","doi":"10.1016/j.peva.2025.102498","DOIUrl":"10.1016/j.peva.2025.102498","url":null,"abstract":"<div><div>We study a monitoring system in which a single source sends status updates to a monitor through a communication channel. The communication channel is modeled as a queueing system, and we assume that attacks occur following a random process. When an attack occurs, all packets in the queueing system are discarded. While one might expect attacks to always negatively impact system performance, we demonstrate in this paper that, from the perspective of Age of Information (AoI), attacks can in some cases reduce the AoI. Our objective is to identify the conditions under which AoI is reduced and to determine the attack rate that minimizes or reduces AoI. First, we analyze single and tandem M/M/1/1 queues with preemption and show that attacks cannot reduce AoI in these cases. Next, we examine a single M/M/1/1 queue without preemption and establish necessary and sufficient conditions for the existence of an attack rate that minimizes AoI. For this scenario, we also derive an upper bound for the optimal attack rate and prove that it becomes tight when the arrival rate of updates is very high. Through numerical experiments, we observe that attacks can reduce AoI in tandem M/M/1/1 queues without preemption, as well as in preemptive M/M/1/2 and M/M/1/3 queues. 
Furthermore, we show that the benefit of attacks on AoI increases with the buffer size.</div></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"169 ","pages":"Article 102498"},"PeriodicalIF":1.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144196352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
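The counter-intuitive effect for the non-preemptive M/M/1/1 can be checked with a short event-driven simulation. The sketch below is an illustrative reconstruction under stated assumptions (Poisson attacks that discard the in-service packet; arrivals that find the server busy are lost), not the paper's analysis, and the parameter values are arbitrary. Intuitively, at a high arrival rate a moderate attack rate swaps a stale in-service packet for a fresher one at no cost in delivery time, since the exponential service is memoryless.

```python
import random

def mean_aoi(lam, mu, nu, horizon=20_000.0, seed=42):
    """Time-average Age of Information for a non-preemptive M/M/1/1 where
    Poisson(nu) attacks discard the packet in service. Arrivals that find
    the server busy are lost (no buffer, no preemption)."""
    rng = random.Random(seed)
    t = 0.0
    busy_gen = None      # generation time of the packet in service, if any
    last_gen = 0.0       # generation time of the last delivered update
    area = 0.0           # integral of the instantaneous age
    while t < horizon:
        # Resampling all clocks at every event is valid by memorylessness.
        ta = rng.expovariate(lam)                                 # next arrival
        tk = rng.expovariate(nu) if nu > 0 else float("inf")      # next attack
        ts = rng.expovariate(mu) if busy_gen is not None else float("inf")
        dt = min(ta, tk, ts)
        age = t - last_gen
        area += age * dt + dt * dt / 2.0   # age grows linearly between events
        t += dt
        if dt == ts:                         # delivery: monitor is refreshed
            last_gen, busy_gen = busy_gen, None
        elif dt == ta and busy_gen is None:  # arrival finds the server idle
            busy_gen = t
        elif dt == tk:                       # attack wipes the in-service packet
            busy_gen = None
    return area / t

no_attack = mean_aoi(lam=20.0, mu=1.0, nu=0.0)
attacked = mean_aoi(lam=20.0, mu=1.0, nu=1.0)
print(f"AoI without attacks: {no_attack:.2f}, with attacks: {attacked:.2f}")
```

With a high arrival rate (lam much larger than mu) the simulated time-average AoI drops when a unit-rate attack process is added, consistent with the paper's claim for the non-preemptive single queue.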
Reliability evaluation of tape library systems
IF 1 CAS Zone 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-09-01 Epub Date: 2025-06-20 DOI: 10.1016/j.peva.2025.102501
Ilias Iliadis, Mark Lantz
Magnetic tape is a digital data storage technology that has evolved continuously over the last seven decades. It provides a cost-effective way to retain the rapidly increasing volumes of data being created in recent years. The low cost per terabyte combined with tape’s low energy consumption makes it an appealing option for storing infrequently accessed data and has resulted in a resurgence in use of the technology. Power and operational failures may damage tapes and lead to data loss. To protect stored data against loss and achieve high data reliability, an erasure coding scheme is employed. A theoretical model capturing the effect of tape failures and latent errors on system reliability is developed. Closed-form expressions are derived for the Mean Time to Data Loss (MTTDL) and the Expected Annual Fraction of Effective Data Loss (EAFEDL) reliability metric, which assesses losses at the file, object, or block level. The results obtained demonstrate that, for realistic values of bit error rates, reliability is affected by the presence of latent errors. The effect of system parameters on reliability is assessed by conducting a sensitivity evaluation. The reliability improvement achieved by employing erasure coding schemes with increased capability is demonstrated. The theoretical results derived can be used to dimension and provision tape libraries to provide desired levels of data durability.
{"title":"Reliability evaluation of tape library systems","authors":"Ilias Iliadis,&nbsp;Mark Lantz","doi":"10.1016/j.peva.2025.102501","DOIUrl":"10.1016/j.peva.2025.102501","url":null,"abstract":"<div><div>Magnetic tape is a digital data storage technology that has evolved continuously over the last seven decades. It provides a cost-effective way to retain the rapidly increasing volumes of data being created in recent years. The low cost per terabyte combined with tape’s low energy consumption make it an appealing option for storing infrequently accessed data and has resulted in a resurgence in use of the technology. Power and operational failures may damage tapes and lead to data loss. To protect stored data against loss and achieve high data reliability, an erasure coding scheme is employed. A theoretical model capturing the effect of tape failures and latent errors on system reliability is developed. Closed-form expressions are derived for the Mean Time to Data Loss (<span><math><mtext>MTTDL</mtext></math></span>) and the Expected Annual Fraction of Effective Data Loss (<span><math><mtext>EAFEDL</mtext></math></span>) reliability metric, which assesses losses at the file, object, or block, level. The results obtained demonstrate that, for realistic values of bit error rates, reliability is affected by the presence of latent errors. The effect of system parameters on reliability is assessed by conducting a sensitivity evaluation. The reliability improvement achieved by employing erasure coding schemes with increased capability is demonstrated. 
The theoretical results derived can be used to dimension and provision tape libraries to provide desired levels of data durability.</div></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"169 ","pages":"Article 102501"},"PeriodicalIF":1.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
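The paper's closed-form MTTDL and EAFEDL expressions are not reproduced here, but the standard machinery behind such derivations — an absorbing continuous-time Markov chain over the number of failed units — is easy to sketch. The model below is a generic illustration with assumed exponential failure and repair rates, not the tape-specific model of the paper: a group of n units tolerates m concurrent failures, a single repair process runs at rate mu, and data loss absorbs the chain.

```python
import numpy as np

def mttdl(n, m, lam, mu):
    """MTTDL of an n-unit redundancy group that tolerates m concurrent
    failures, with per-unit failure rate lam and a single repair process
    of rate mu. Transient states i = 0..m failed units; state m+1 (loss)
    is absorbing. Expected absorption times solve Q t = -1."""
    Q = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        fail = (n - i) * lam           # any of the n-i surviving units fails
        Q[i, i] -= fail                # outflow (to i+1, possibly absorbing)
        if i + 1 <= m:
            Q[i, i + 1] = fail
        if i > 0:
            Q[i, i] -= mu              # repair brings one unit back
            Q[i, i - 1] = mu
    t = np.linalg.solve(Q, -np.ones(m + 1))
    return t[0]

# Single-parity group of 8: the exact birth-death answer is
# ((2n-1)*lam + mu) / (n*(n-1)*lam**2), close to mu/(n*(n-1)*lam**2) when mu >> lam.
val = mttdl(8, 1, lam=1e-5, mu=1.0)
print(f"MTTDL = {val:.3e} time units")
```

The same absorbing-chain setup extends to stronger erasure codes (larger m), which is one way to see the reliability improvement from increased code capability that the paper quantifies.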
The Multiserver Job Queuing Model with two job classes and Cox-2 service times
IF 1 CAS Zone 4 (Computer Science) Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2025-09-01 Epub Date: 2025-05-19 DOI: 10.1016/j.peva.2025.102486
Adityo Anggraito , Diletta Olliaro , Andrea Marin , Marco Ajmone Marsan
Datacenters comprise a variety of resources (processors, memory, input/output modules, etc.) that are shared among requests for the execution of computing jobs submitted by datacenter users. Jobs differ in their frequency of arrivals, demand for resources, and execution times. Resource sharing generates contention, especially in heavily loaded systems, which must therefore implement effective scheduling policies for incoming jobs. The First-In First-Out (FIFO) policy is often used for batch jobs, but may produce under-utilization of resources, in terms of wasted servers. This is due to the fact that a job that requires many resources can block later-arriving jobs that could otherwise be served because they require fewer resources. The mathematical construct often used to study this problem is the Multiserver Job Queuing Model (MJQM), where servers represent resources which are requested and used by jobs in different quantities. Unfortunately, very few explicit results are known for the MJQM, especially at realistic system loads (i.e., before saturation), and hardly any consider the case of non-exponential service time distributions. In this paper, we propose the first exact analytical model of the non-saturated MJQM in the case of two classes of customers with service times having a 2-phase Coxian distribution. Our analysis is based on the matrix geometric method. Our results provide insight into datacenter dynamics, thus supporting the design of more complex schedulers, capable of improving performance and energy consumption within large datacenters.
{"title":"The Multiserver Job Queuing Model with two job classes and Cox-2 service times","authors":"Adityo Anggraito ,&nbsp;Diletta Olliaro ,&nbsp;Andrea Marin ,&nbsp;Marco Ajmone Marsan","doi":"10.1016/j.peva.2025.102486","DOIUrl":"10.1016/j.peva.2025.102486","url":null,"abstract":"<div><div>Datacenters comprise a variety of resources (processors, memory, input/output modules, etc.) that are shared among requests for the execution of computing jobs submitted by datacenter users. Jobs differ in their frequency of arrivals, demand for resources, and execution times. Resource sharing generates contention, especially in heavily loaded systems, that must therefore implement effective scheduling policies for incoming jobs. The First-In First-Out (FIFO) policy is often used for batch jobs, but may produce under-utilization of resources, in terms of wasted servers. This is due to the fact that a job that requires many resources can block jobs arriving later that could be served because they require fewer resources. The mathematical construct often used to study this problem is the Multiserver Job Queuing Model (MJQM), where servers represent resources which are requested and used by jobs in different quantities. Unfortunately, very few explicit results are known for the MJQM, especially at realistic system loads (i.e., before saturation), and hardly any considers the case of non-exponential service time distributions. In this paper, we propose the first exact analytical model of the non-saturated MJQM in case of two classes of customers with service times having 2-phase Coxian distribution. Our analysis is based on the matrix geometric method. 
Our results provide insight into datacenter dynamics, thus supporting the design of more complex schedulers, capable of improving performance and energy consumption within large datacenters.</div></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"169 ","pages":"Article 102486"},"PeriodicalIF":1.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144131426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
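The matrix-geometric method underlying this analysis can be sketched for a generic quasi-birth-death (QBD) process. The code below is a minimal illustration, not the paper's two-class Cox-2 model: it computes the rate matrix R, the minimal nonnegative solution of A0 + R·A1 + R²·A2 = 0 for level-up/local/level-down generator blocks A0, A1, A2, via the classical functional iteration, and checks it on an M/M/1 queue viewed as a 1×1 QBD, where R reduces to the load ρ = λ/μ.

```python
import numpy as np

def qbd_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution R of A0 + R @ A1 + R @ R @ A2 = 0,
    where A0/A1/A2 are the level-up/local/level-down blocks of a QBD
    generator. Stationary level probabilities then satisfy
    pi_{n+1} = pi_n @ R (the matrix-geometric form)."""
    R = np.zeros_like(A0, dtype=float)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_next = -(A0 + R @ R @ A2) @ A1_inv   # fixed-point step
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("R iteration did not converge")

# Sanity check on M/M/1 (lam = 0.5, mu = 1.0) as a QBD with 1x1 blocks,
# where R must reduce to the scalar load rho = lam/mu.
lam, mu = 0.5, 1.0
R = qbd_R(np.array([[lam]]), np.array([[-(lam + mu)]]), np.array([[mu]]))
print(R[0, 0])
```

For the MJQM of the paper, the blocks are larger (they encode the two job classes and the Cox-2 service phases), but the same R drives the geometric tail of the stationary distribution.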