
Latest publications in Journal of Systems Architecture

Error correction and erasure codes for robust network steganography
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-01 | DOI: 10.1016/j.sysarc.2024.103191
Jörg Keller, Saskia Imhof, Peter Sobe

Error correction and erasure codes and steganographic channels use related methods, but are investigated separately. We detail an idea from the literature for a steganographic channel in a transmission with an error correction code and experimentally investigate it with respect to bandwidth, robustness, and detectability. We expand this construction to provide an example of multi-level steganography, i.e., a steganographic channel within a steganographic channel. Furthermore, we investigate the advantages in bandwidth and stealthiness that reversibility of such a steganographic channel brings, together with a new proposal for a covert channel in error-corrected data.
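The general idea of a steganographic channel inside an error correction code can be illustrated with a small sketch (not the paper's actual construction): the sender deliberately flips one bit of a valid Hamming(7,4) codeword, and the flip position (0 = no flip, 1..7) carries a 3-bit covert symbol. The receiver's ordinary error correction then recovers both the covert symbol (the syndrome) and the undamaged cover data.

```python
# Illustrative covert channel in a Hamming(7,4) code (assumed example,
# not the construction evaluated in the paper).

def hamming74_encode(d):
    """Encode 4 data bits; parity bits sit at positions 1, 2, 4 (1-indexed)."""
    c = [0] * 8                      # c[1..7]; c[0] unused
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def embed(codeword, symbol):
    """Embed a covert symbol in 0..7 by flipping the bit at that position."""
    out = codeword[:]
    if symbol:
        out[symbol - 1] ^= 1
    return out

def extract_and_correct(codeword):
    """Return (covert symbol, corrected codeword) from the syndrome."""
    c = [0] + codeword
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
        + 2 * (c[2] ^ c[3] ^ c[6] ^ c[7]) \
        + 4 * (c[4] ^ c[5] ^ c[6] ^ c[7])
    fixed = codeword[:]
    if s:
        fixed[s - 1] ^= 1
    return s, fixed
```

This also makes the bandwidth/robustness trade-off in the abstract concrete: the channel gains 3 covert bits per 7-bit codeword, but a genuine channel error now produces a double error the code can no longer correct.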

Citations: 0
CRPIM: An efficient compute-reuse scheme for ReRAM-based Processing-in-Memory DNN accelerators
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-31 | DOI: 10.1016/j.sysarc.2024.103192
Shihao Hong, Yeh-Ching Chung

Resistive random access memory (ReRAM) is a promising technology for AI Processing-in-Memory (PIM) hardware because of its compatibility with CMOS, small footprint, and ability to complete matrix–vector multiplication (MVM) workloads inside the memory device itself. However, when an MVM has to be split into smaller-granularity sequential sub-tasks in practice, duplicate weights and inputs introduce redundant computations. Recent studies have proposed repetition-pruning to address this issue, but the buffer allocation strategy for enhancing buffer device utilization remains understudied. Preliminary experiments observing the input patterns of neural layers across different datasets show that the similarity of repetitions allows us to transfer the buffer allocation strategy obtained from a small dataset to computation with a large dataset. Hence, this paper proposes a practical compute-reuse mechanism for ReRAM-based PIM, called CRPIM, which replaces repetitive computations with buffering and reading. Moreover, the subsequent buffer allocation problem is resolved at both the inter-layer and intra-layer levels. Our experimental results demonstrate that CRPIM significantly reduces ReRAM cells and execution time while maintaining adequate buffer and energy overhead.
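The core compute-reuse idea can be sketched as follows (a minimal model, not CRPIM's buffer-allocation strategy): an MVM is split into column tiles, roughly the granularity at which a ReRAM crossbar would be programmed, and per-tile partial sums are cached so a recurring input sub-vector costs a buffer read instead of a recomputation.

```python
import numpy as np

def tiled_mvm(W, x, cache, tile=2):
    """Matrix-vector product over column tiles with compute reuse.
    cache maps (tile index, input pattern) -> cached partial sum.
    Illustrative sketch only; tile size and keying are assumptions."""
    y = np.zeros(W.shape[0])
    hits = 0
    for s in range(0, W.shape[1], tile):
        sub = np.ascontiguousarray(x[s:s + tile])
        key = (s, sub.tobytes())     # tile index + exact input pattern
        if key in cache:
            hits += 1                # reuse: a buffer read replaces an MVM
            part = cache[key]
        else:
            part = W[:, s:s + tile] @ sub
            cache[key] = part
        y = y + part
    return y, hits
```

Sharing one cache across a batch of inputs mirrors the abstract's observation that repetition patterns learned on a small dataset transfer to computation on a large one.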

Citations: 0
Lyapunov-guided deep reinforcement learning for delay-aware online task offloading in MEC systems
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-31 | DOI: 10.1016/j.sysarc.2024.103194
Longbao Dai, Jing Mei, Zhibang Yang, Zhao Tong, Cuibin Zeng, Keqin Li

With the arrival of 5G technology and the popularization of the Internet of Things (IoT), mobile edge computing (MEC) has great potential in handling delay-sensitive and compute-intensive (DSCI) applications. Meanwhile, the need for reduced latency and improved energy efficiency in terminal devices is becoming increasingly urgent. However, users are affected by channel conditions and bursty computational demands in dynamic MEC environments, which can lead to longer task correspondence times. Therefore, finding an efficient task offloading method in stochastic systems is crucial for optimizing system energy consumption. Additionally, the delay due to frequent user–MEC interactions cannot be overlooked. In this article, we first frame task offloading as a dynamic optimization problem. The goal is to minimize the system’s long-term energy consumption while ensuring the task queue’s stability over the long term. Using the Lyapunov optimization technique, the task processing deadline problem is converted into a stability control problem for the virtual queue. Then, a novel Lyapunov-guided deep reinforcement learning (DRL) for delay-aware offloading algorithm (LyD2OA) is designed. LyD2OA can figure out the task offloading scheme online, and adaptively offload tasks over links with better network quality. Meanwhile, it ensures that deadlines are not violated when offloading tasks in poor communication environments. In addition, we perform a rigorous mathematical analysis of the performance of LyD2OA and prove the existence of upper bounds on the virtual queue. It is theoretically proven that LyD2OA enables the system to realize the trade-off between energy consumption and delay. Finally, extensive simulation experiments verify that LyD2OA has good performance in minimizing energy consumption and keeping latency low.
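The Lyapunov machinery the abstract refers to can be shown in miniature (the action set and the energy/service numbers below are made up for illustration; the paper's LyD2OA layers DRL on top of this rule): a virtual queue turns the deadline constraint into a queue-stability condition, and a drift-plus-penalty rule trades energy against backlog through the weight V.

```python
ACTIONS = ("local", "offload")
ENERGY = {"local": 5.0, "offload": 1.0}    # energy cost per slot (assumed)
SERVICE = {"local": 8.0, "offload": 3.0}   # bits served per slot (assumed)

def virtual_queue_step(Q, arrival, service):
    """Virtual queue update Q(t+1) = max(Q(t) + a(t) - b(t), 0).
    Keeping Q stable is equivalent to meeting the long-term constraint."""
    return max(Q + arrival - service, 0.0)

def drift_plus_penalty_choice(Q, task_bits, V):
    """Pick the action minimizing V * energy + Q * residual backlog:
    large V favors energy saving, a large backlog Q favors fast service."""
    return min(ACTIONS, key=lambda a: V * ENERGY[a] + Q * (task_bits - SERVICE[a]))
```

With an empty queue the cheap action wins; once the virtual queue grows, the rule switches to the faster action, which is exactly the energy/delay trade-off the abstract proves for LyD2OA.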

Citations: 0
MPR-QUIC: Multi-path partially reliable transmission for priority and deadline-aware video streaming
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-31 | DOI: 10.1016/j.sysarc.2024.103195
Biao Han, Cao Xu, Yahui Li, Xiaoyan Wang, Peng Xun

Video streaming has dominated Internet traffic over the past few years, spurring innovations in transport protocols. The QUIC protocol has advantages over TCP, such as faster connection setup and reduced head-of-line blocking. Multi-path transport protocols like Multipath QUIC (MPQUIC) have been proposed to aggregate the bandwidth of multiple links and provide reliable transmission in poor network conditions. However, reliable transmission incurs unnecessary retransmission costs for MPQUIC, resulting in deteriorating performance, especially in real-time video streaming. Partially reliable transmission, which supports both reliable and unreliable delivery, may perform better by trading off data reliability and timeliness. In this paper, we introduce MPR-QUIC, a multi-path partially reliable transmission protocol for QUIC. Based on MPQUIC, MPR-QUIC extends unreliable transmission to provide partially reliable transmission over multiple paths. Specific schedulers are designed in MPR-QUIC based on priority and deadline, respectively, for video streaming optimization. Video frames with high priority are transmitted first since frames with low priority cannot be decoded before their arrival. Additionally, to alleviate rebuffering and freezing of the video, as many frames as possible should be delivered before the deadline. We evaluate MPR-QUIC experimentally on a testbed and in emulations. Results show that the rebuffer time of MPR-QUIC is significantly decreased by 60% to 80% when compared to state-of-the-art multi-path transmission solutions. The completion ratio of transmitted data blocks is increased by almost 100%.
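A priority-and-deadline scheduler of the kind the abstract describes can be sketched greedily (this is an assumed simplification, not MPR-QUIC's scheduler): frames are served in priority order, and a frame that could no longer finish before its deadline is dropped rather than retransmitted, which is the essence of partial reliability.

```python
import heapq

def schedule(frames, now, link_rate):
    """frames: list of (priority, deadline, size_bits); lower priority value
    means more important. Returns (sent, dropped). Frames that would finish
    after their deadline are dropped instead of blocking fresher frames."""
    heap = list(frames)
    heapq.heapify(heap)              # orders by (priority, deadline, size)
    t, sent, dropped = now, [], []
    while heap:
        prio, deadline, size = heapq.heappop(heap)
        finish = t + size / link_rate
        if finish > deadline:
            dropped.append((prio, deadline, size))   # stale frame: skip it
        else:
            sent.append((prio, deadline, size))
            t = finish               # link is busy until this frame is out
    return sent, dropped
```

Dropping stale frames early is what reduces rebuffering: link time is never spent on data that would arrive too late to be decoded.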

Citations: 0
AnoPas: Practical anonymous transit pass from group signatures with time-bound keys
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-31 | DOI: 10.1016/j.sysarc.2024.103184
Rui Shi, Yang Yang, Yingjiu Li, Huamin Feng, Hwee Hwa Pang, Robert H. Deng

An anonymous transit pass system allows passengers to access transport services within fixed time periods, with their privileges automatically deactivating upon time expiration. Although existing transit pass systems are deployable on powerful devices like PCs, their adaptation to more user-friendly devices, such as mobile phones with smart cards, is inefficient due to their reliance on heavyweight operations like bilinear maps. In this paper, we introduce an innovative anonymous transit pass system, dubbed AnoPas, optimized for deployment on mobile phones with smart cards, where the smart card is responsible for crucial lightweight operations and the mobile phone handles key-independent and time-consuming tasks. Group signatures with time-bound keys (GS-TBK) serve as our core component, representing a new variant of standard group signatures for the secure use of time-based digital services, preserving users’ privacy while providing flexible authentication services. We first construct a practical GS-TBK scheme using tag-based signatures and then apply it to the design of AnoPas. We achieve the most efficient passing protocol compared to the state-of-the-art AnoPas/GS-TBK schemes. We also present an implementation showing that our passing protocol takes around 38.6 ms on a smart card and around 33.6 ms on a mobile phone.

Citations: 0
Efficient and privacy-preserving outsourced unbounded inner product computation in cloud computing
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-29 | DOI: 10.1016/j.sysarc.2024.103190
Jiayun Yan, Jie Chen, Chen Qian, Anmin Fu, Haifeng Qian

In cloud computing, a central challenge is managing massive data, which computationally overburdens data users. Outsourced computation can effectively ease the memory and computation pressure of such overburdened data storage. We propose an outsourced unbounded decryption scheme, under a standard assumption and in the standard model, for large data settings based on inner product computation. Security analysis shows that it achieves adaptive security. In the scheme, the data owner transmits encrypted data to a third-party cloud server, which is responsible for computing over a significant amount of data; the processed data is then handed over to the data user for decryption. In addition, there is no need to fix a priori bounds on the length of the plaintext vector: the encryption algorithm can run without the length of the input data being determined before the setup phase, i.e., our scheme works in the unbounded setting. Theoretical analysis shows that the storage overhead and communication cost of the data users remain independent of the ciphertext size. The experimental results indicate that efficiency and performance are greatly enhanced, about 0.03 s for data users, at the expense of increased computing time on the cloud server.
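The general goal of computing an inner product on data the server cannot read can be illustrated with a much simpler, unrelated mechanism: additive secret sharing between two non-colluding servers. This sketch is only an illustration of outsourced privacy-preserving inner products; the paper's scheme is a single-server, adaptively secure construction, and the modulus below is an arbitrary assumption. Note the sharing works for vectors of any length, the "unbounded" flavor.

```python
import random

MOD = 2**61 - 1  # working modulus for the additive shares (assumption)

def share(x):
    """Split x into two additive shares; each share alone is uniformly
    random and reveals nothing about x."""
    r = [random.randrange(MOD) for _ in x]
    return r, [(xi - ri) % MOD for xi, ri in zip(x, r)]

def server_inner_product(x_share, y):
    """Each (non-colluding) server computes its share of <x, y>."""
    return sum(a * b for a, b in zip(x_share, y)) % MOD

def reconstruct(p1, p2):
    """The data user adds the two partial results to recover <x, y>."""
    return (p1 + p2) % MOD
```

The data user's work (one modular addition) is independent of how much data the servers processed, mirroring the abstract's claim that user-side cost does not grow with the ciphertext size.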

Citations: 0
WOPE: A write-optimized and parallel-efficient B+-tree for persistent memory
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-28 | DOI: 10.1016/j.sysarc.2024.103187
Xianyu He, Runyu Zhang, Pengpeng Tian, Lening Zhou, Min Lian, Chaoshu Yang

Emerging Persistent Memory (PM) usually has the serious drawback of expensive write activities. Thus, existing PM-oriented B+-trees mainly concentrate on alleviating the write overhead (i.e., reducing PM writes and flush instructions). Unfortunately, due to the improper data organization in the sorted leaf node, existing solutions cause massive data migration when data insertion or node splitting occurs. In this paper, we propose a write-optimized PM-oriented B+-tree with aligned flush and selective migration, called WPB+-tree, to solve the above problems. WPB+-tree first adopts a buffer-assisted mechanism that temporarily stores newly inserted data to reduce the overhead of entry shifts. Second, WPB+-tree employs selective migration of node entries to move less than half of the data when a node is split. Moreover, existing PM-oriented B+-trees usually employ a coarse-grained lock to avoid thread conflicts, which can severely degrade concurrency efficiency. Thus, we further propose a fine-grained lock technique for WPB+-tree, namely the parallel-efficient WPB+-tree (WOPE), to improve concurrency efficiency. We implement the proposed WPB+-tree and WOPE on Linux and conduct extensive evaluations with actual persistent memory, where WOPE achieves average performance improvements of 23.5% (insert), 30.7% (read), and 15.3% (scan) over the straightforward solutions (i.e., SSB-Tree, Fast&Fair, and wB+tree), and 10.1% over WPB+-tree.
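The buffer-assisted leaf mechanism can be sketched as follows (an assumed layout for illustration, not WPB+-tree's exact node format): inserts append to a small unsorted buffer, one write with no entry shifts, and the buffer is merged into the sorted area in one batch, amortizing the shifting cost over many inserts.

```python
import bisect

class BufferedLeaf:
    """Leaf node with a sorted region plus an unsorted insert buffer."""

    def __init__(self, buf_cap=4):
        self.sorted_keys = []        # sorted region of the leaf
        self.buffer = []             # unsorted, append-only insert buffer
        self.buf_cap = buf_cap

    def insert(self, key):
        self.buffer.append(key)      # append only: no entry shifts per insert
        if len(self.buffer) >= self.buf_cap:
            self._flush()

    def _flush(self):
        # one batched merge amortizes the shifting cost of many inserts
        for k in self.buffer:
            bisect.insort(self.sorted_keys, k)
        self.buffer.clear()

    def lookup(self, key):
        if key in self.buffer:       # linear scan is fine: buffer is tiny
            return True
        i = bisect.bisect_left(self.sorted_keys, key)
        return i < len(self.sorted_keys) and self.sorted_keys[i] == key
```

On PM the payoff is that each insert touches one cache line of the buffer instead of rewriting (and flushing) a run of shifted sorted entries.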

Citations: 0
A conflict-free CAN-to-TSN scheduler for CAN-TSN gateway
IF 4.5 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-05-28 | DOI: 10.1016/j.sysarc.2024.103188
Wenyan Yan, Bin Fu, Jing Huang, Ruiqi Lu, Renfa Li, Guoqi Xie

The automotive Electrical/Electronic (E/E) architecture with Time-Sensitive Networking (TSN) as the backbone network and Controller Area Network (CAN) as the intra-domain network has attracted extensive research attention. In this architecture, the CAN-TSN gateway serves as a vital hub for communication between the CAN and TSN networks. However, with frequent information exchange between domains, multiple real-time applications inevitably compete for the same network resources. The limited availability of schedule-table entries and bandwidth poses challenges for scheduling design. To mitigate transmission conflicts at the CAN-TSN gateway, this paper proposes a CAN-to-TSN scheduler consisting of two primary stages. The first stage introduces the Message Aggregation Optimization (MAO) algorithm, which aggregates multiple CAN messages into a single TSN message, ultimately decreasing the communication overhead and the number of schedule-table entries. The second stage proposes the Exploratory Message Scheduling Optimization (EMSO) algorithm based on MAO. EMSO disaggregates and reassembles CAN messages with small deadlines within the currently unscheduled TSN message to improve the acceptance ratio of CAN messages. Experimental results demonstrate that, compared with state-of-the-art algorithms, EMSO achieves an average CAN-message acceptance ratio that is 4.3% higher in preemptive mode and 8.2% higher in non-preemptive mode in TSN.

以时敏网络(TSN)为骨干网络,以控制器局域网(CAN)为域内网络的汽车电气/电子(E/E)架构已引起广泛的研究关注。在这种架构中,CAN-TSN 网关是 CAN 和 TSN 网络之间进行通信的重要枢纽。然而,由于域间信息交换频繁,多个实时应用不可避免地会争夺相同的网络资源。调度表项的有限可用性和带宽分配给调度设计带来了挑战。为缓解 CAN-TSN 网关上的传输冲突,本文提出了一种由两个主要阶段组成的 CAN-to-TSN 调度器。第一阶段引入报文聚合优化(MAO)算法,将多条 CAN 报文聚合成一条 TSN 报文,最终减少通信开销和调度表条目数。第二阶段在 MAO 的基础上提出了探索性报文调度优化(EMSO)算法。EMSO 在当前未调度的 TSN 报文中分解并重新组装截止日期较小的 CAN 报文,以提高 CAN 报文的接受率。实验结果表明,与最先进的算法相比,EMSO 在 TSN 的抢占式模式下实现的 CAN 报文平均接收率提高了 4.3%,在非抢占式模式下提高了 8.2%。
{"title":"A conflict-free CAN-to-TSN scheduler for CAN-TSN gateway","authors":"Wenyan Yan ,&nbsp;Bin Fu ,&nbsp;Jing Huang ,&nbsp;Ruiqi Lu ,&nbsp;Renfa Li ,&nbsp;Guoqi Xie","doi":"10.1016/j.sysarc.2024.103188","DOIUrl":"https://doi.org/10.1016/j.sysarc.2024.103188","url":null,"abstract":"<div><p>The automotive Electrical/Electronic (E/E) architecture with Time-Sensitive Networking (TSN) as the backbone network and Controller Area Network (CAN) as the intra-domain network has attracted extensive research attention. In this architecture, the CAN-TSN gateway serves as a vital hub for communication between the CAN and TSN networks. However, with frequent information exchange between domains, multiple real-time applications inevitably compete for the same network resources. The limited availability of schedule table entries and bandwidth allocation pose challenges in scheduling design. To mitigate the transmission conflicts at the CAN-TSN gateway, this paper proposes a CAN-to-TSN scheduler consisting of two primary stages. The first stage introduces the Message Aggregation Optimization (MAO) algorithm to aggregate multiple CAN messages into a single TSN message, ultimately decreasing the communication overhead and the schedule table entries number. The second stage proposes the Exploratory Message Scheduling Optimization (EMSO) algorithm based on MAO. EMSO disaggregates and reassembles the CAN messages with small deadlines within the currently un-scheduled TSN message to improve the acceptance ratio of CAN messages. 
Experimental results demonstrate that EMSO achieves an average acceptance ratio of CAN messages 4.3% higher in preemptive mode and 8.2% higher in non-preemptive mode in TSN than state-of-the-art algorithms.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103188"},"PeriodicalIF":4.5,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141291238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
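The message-aggregation step described in the abstract above can be illustrated with a simple greedy packer: CAN messages are taken in deadline order and packed into a TSN frame until its payload limit is reached, so several CAN messages share one schedule-table entry. This is only a sketch of the aggregation idea under assumed message sizes, not the paper's MAO algorithm; the function name and field names are invented.

```python
def aggregate(can_msgs, tsn_payload=64):
    """Greedily pack CAN messages (dicts with id, size, deadline)
    into TSN frames.

    Messages are taken in deadline order; a new frame is opened when
    the current one cannot hold the next message.
    """
    frames, current, used = [], [], 0
    for msg in sorted(can_msgs, key=lambda m: m["deadline"]):
        if used + msg["size"] > tsn_payload and current:
            frames.append(current)      # close the full frame
            current, used = [], 0
        current.append(msg["id"])
        used += msg["size"]
    if current:
        frames.append(current)
    return frames
```

Packing three 8-byte CAN messages into 16-byte TSN payloads yields two frames instead of three schedule-table entries; tighter deadlines are served first because of the initial sort.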
Parallel GEMM-based convolutions for deep learning on multicore ARM and RISC-V architectures 在多核 ARM 和 RISC-V 架构上基于 GEMM 的并行卷积进行深度学习
IF 4.5 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-05-24 DOI: 10.1016/j.sysarc.2024.103186
Héctor Martínez , Sandra Catalán , Adrián Castelló , Enrique S. Quintana-Ortí

We present high-performance, multi-threaded implementations of three GEMM-based convolution algorithms for multicore processors with ARM and RISC-V architectures. The codes are integrated into CONVLIB, a library with the following unique features: (1) scripts that automatically generate a key component of GEMM, known as the micro-kernel, which is typically written in assembly language; (2) a modified analytical model that automatically tunes the algorithms to the underlying cache architecture; (3) the ability to select four hyper-parameters (micro-kernel, cache parameters, parallel loop, and GEMM algorithm) dynamically between calls to the library, without recompiling it; and (4) a driver to identify the best hyper-parameters. In addition, we provide a detailed performance evaluation of the convolution algorithms on five ARM and RISC-V processors, and we publicly release the codes.

我们介绍了针对 ARM 和 RISC-V 架构多核处理器的三种基于 GEMM 的卷积算法的高性能多线程实现。这些代码被集成到 CONVLIB 库中,该库具有以下独特功能:(1) 自动生成 GEMM 关键组件(即通常用汇编语言编写的微内核)的脚本;(2) 根据底层高速缓存架构自动调整算法的改进分析模型;(3) 在调用库之间动态选择微内核、高速缓存参数、并行循环和 GEMM 算法这四个超参数的能力,而无需重新编译;(4) 识别最佳超参数的驱动程序。此外,我们还在五种 ARM 和 RISC-V 处理器上对卷积算法进行了详细的性能评估,并公开发布了代码。
{"title":"Parallel GEMM-based convolutions for deep learning on multicore ARM and RISC-V architectures","authors":"Héctor Martínez ,&nbsp;Sandra Catalán ,&nbsp;Adrián Castelló ,&nbsp;Enrique S. Quintana-Ortí","doi":"10.1016/j.sysarc.2024.103186","DOIUrl":"10.1016/j.sysarc.2024.103186","url":null,"abstract":"<div><p>We present high performance, multi-threaded implementations of three GEMM-based convolution algorithms for multicore processors with ARM and RISC-V architectures. The codes are integrated into CONVLIB, a library that has the following unique features: (1) scripts to automatically generate a key component of GEMM, known as the micro-kernel, which is typically written in assembly language; (2) a modified analytical model to automatically tune the algorithms to the underlying cache architecture; (3) the ability to select four hyper-parameters: micro-kernel, cache parameters, parallel loop, and GEMM algorithm dynamically between calls to the library, without recompiling it; and (4) a driver to identify the best hyper-parameters. In addition, we provide a detailed performance evaluation of the convolution algorithms, on five ARM and RISC-V processors, and we publicly release the codes.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103186"},"PeriodicalIF":4.5,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141142502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
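The GEMM-based approach named in this entry can be illustrated with the classic im2col lowering: each convolution window is unrolled into one column of a matrix, so the whole convolution reduces to a single matrix multiply. Below is a minimal NumPy sketch (single channel, stride 1, no padding) of that lowering; it is not CONVLIB code, and the function name is invented.

```python
import numpy as np

def conv2d_im2col(image, kernel):
    """2-D convolution (cross-correlation) via im2col + GEMM.

    Each kh x kw window of `image` becomes one column of `cols`; the
    lowered matrix is then multiplied by the flattened kernel in a
    single GEMM call.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    # Lower the input: one column per output position.
    cols = np.empty((kh * kw, oh * ow))
    for y in range(oh):
        for x in range(ow):
            cols[:, y * ow + x] = image[y:y + kh, x:x + kw].ravel()
    # The convolution is now a single matrix-vector product.
    return (kernel.ravel() @ cols).reshape(oh, ow)
```

The trade-off this lowering makes is extra memory for the `cols` matrix in exchange for routing all the arithmetic through one highly optimized GEMM, which is exactly where a tuned micro-kernel pays off.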
Designated-tester Identity-Based Authenticated Encryption with Keyword Search with applications in cloud systems 基于身份验证的指定测试者加密与关键字搜索,在云系统中的应用
IF 4.5 2区 计算机科学 Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-05-23 DOI: 10.1016/j.sysarc.2024.103183
Danial Shiraly , Ziba Eslami , Nasrollah Pakniat

The advent of cloud computing has made cloud server outsourcing increasingly popular among data owners. However, storing sensitive data on cloud servers raises serious challenges for data security and privacy. Public Key Authenticated Encryption with Keyword Search (PAEKS) is an effective method that protects information confidentiality and supports keyword searches. Identity-Based Authenticated Encryption with Keyword Search (IBAEKS) is a PAEKS variant in identity-based settings, designed to solve the intractable certificate-management problem. To the best of our knowledge, only two IBAEKS schemes exist in the literature, both presented with weak security models that make them vulnerable to what are known as Fully Chosen Keyword attacks. Moreover, the existing IBAEKS schemes are based on the time-consuming bilinear pairing operation, leading to a significant increase in computational cost. To overcome these issues, in this paper, we first propose an enhanced security model for IBAEKS and compare it with existing models. We then prove that the existing IBAEKS schemes are not secure in our enhanced model. We also propose an efficient pairing-free dIBAEKS scheme and prove that it is secure under the enhanced security model. Finally, we compare our proposed scheme with related constructions to indicate its overall superiority.

云计算的出现使云服务器外包越来越受到数据所有者的青睐。然而,在云服务器上存储敏感数据给数据的安全性和隐私性带来了严峻的挑战。带关键字搜索的公钥认证加密(PAEKS)是一种有效的方法,既能保护信息机密性,又能支持关键字搜索。基于身份的关键字搜索认证加密(IBAEKS)是 PAEKS 在基于身份设置下的变体,旨在解决棘手的证书管理问题。据我们所知,文献中只存在两种 IBAEKS 方案,这两种方案的安全模型都很薄弱,容易受到所谓的全选关键词攻击。此外,现有的 IBAEKS 方案都基于耗时的双线性配对操作,导致计算成本大幅增加。为了克服这些问题,本文首先提出了 IBAEKS 的增强安全模型,并与现有模型进行了比较。然后,我们证明现有的 IBAEKS 方案在我们的增强模型中并不安全。我们还提出了一种高效的无配对 dIBAEKS 方案,并证明它在增强安全模型下是安全的。最后,我们将提出的方案与相关结构进行比较,以说明其整体优越性。
{"title":"Designated-tester Identity-Based Authenticated Encryption with Keyword Search with applications in cloud systems","authors":"Danial Shiraly ,&nbsp;Ziba Eslami ,&nbsp;Nasrollah Pakniat","doi":"10.1016/j.sysarc.2024.103183","DOIUrl":"10.1016/j.sysarc.2024.103183","url":null,"abstract":"<div><p>The advent of cloud computing has made cloud server outsourcing increasingly popular among data owners. However, the storage of sensitive data on cloud servers engenders serious challenges for the security and privacy of data. Public Key Authenticated Encryption with Keyword Search (PAEKS) is an effective method that protects information confidentiality and supports keyword searches. Identity-Based Authenticated Encryption with Keyword Search (IBAEKS) is a PAEKS variant in identity-based settings, designed for solving the intractable certificate management problem. To the best of our knowledge, only two IBAEKS schemes exist in the literature, both presented with weak security models that make them vulnerable against what is known as Fully Chosen Keyword attacks. Moreover, the existing IBAEKS schemes are based on the time-consuming bilinear pairing operation, leading to a significant increase in computational cost. To overcome these issues, in this paper, we first propose an enhanced security model for IBAEKS and compare it with existing models. We then prove that the existing IBAEKS schemes are not secure in our enhanced model. We also propose an efficient pairing-free dIBAEKS scheme and prove that it is secure under the enhanced security model. 
Finally, we compare our proposed scheme with related constructions to indicate its overall superiority.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"152 ","pages":"Article 103183"},"PeriodicalIF":4.5,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141132233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0