
Latest Publications in IEEE Transactions on Emerging Topics in Computing

Designing Mobile Technologies to Encourage Civic Engagement: The Role of Situated Motivational Affordances
IF 5.1 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-24 | DOI: 10.1109/TETC.2023.3296772
Mónica Sánchez de Francisco;Paloma Díaz;Teresa Onorati;Álvaro Monteron;Ignacio Aedo
Social and ubiquitous computing opens up many opportunities to engage citizens in activities that benefit their communities. Technology is ready and available, but there are still open issues concerning how to engage people in activities that are not extrinsically rewarding or whose impact is not immediately perceived. In this paper, we explore the role that situated motivational affordances can play in encouraging citizens in one such activity: early warning. To this end, we designed and implemented a gamified app, IWarn, which was iteratively designed following an action-research process to align the needs and capabilities of two types of stakeholders: emergency managers and citizens. The situated motivational affordances framework guided the evaluation, considering both the motivational affordances enabled by the app and the situation in which it was used. The IWarn app was evaluated in an in-the-wild deployment in which 4 emergency workers and 17 citizens took part in a real exercise for one week. Our results suggest that the gamified elements helped to improve intrinsic and extrinsic motivation as well as user engagement. This work contributes to the social computing domain by illustrating a use case where carefully designed gamification can help engage citizens in participatory processes.
Citations: 0
Privacy-Preserving Authentication Protocols for IoT Devices Using the SiRF PUF
IF 5.9 | CAS Zone 2, Computer Science | Q1 Computer Science | Pub Date: 2023-07-20 | DOI: 10.1109/TETC.2023.3296016
Jim Plusquellic;Eirini Eleni Tsiropoulou;Cyrus Minwalla
Authentication between IoT devices is important for maintaining security, trust, and data integrity in an edge device ecosystem. The low power budget and reduced computing capacity of IoT devices make public-private, certificate-based forms of authentication impractical, while other lighter-weight, symmetric-cryptography-based approaches, such as message authentication codes, are easy to spoof in unsupervised environments where adversaries have direct physical access to the device. Such environments are better served by security primitives rooted in the hardware, with capabilities exceeding those available in cryptography-only frameworks. A key foundational hardware security primitive is the physical unclonable function, or PUF. PUFs are well known for removing the need to store secrets in secure non-volatile memories and for providing very large sets of authentication credentials. In this article, we describe two PUF-based mutual authentication protocols rooted in the entropy provided by a strong PUF. The security properties of the authentication protocols, called COBRA and PARCE, are evaluated in hardware experiments on SoC-based FPGAs and under extended industry-standard operating conditions. A codesign-based system architecture is presented in which the SiRF PUF and core authentication functions are implemented in the programmable logic as a secure enclave, while network and database operations are implemented in software on an embedded microprocessor.
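The abstract does not spell out the COBRA or PARCE protocol steps, but the general shape of PUF-based challenge-response authentication can be sketched in a few lines. The toy model below (one direction of a mutual exchange, for brevity) stands in for the PUF with a keyed hash; a real PUF derives its response from manufacturing variation rather than a stored secret. All names (`puf`, `Device`, `Server`) are invented for the illustration.

```python
import hashlib
import hmac
import os

def puf(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for a strong PUF: a deterministic, device-unique mapping.
    # (A real PUF derives this from silicon variation, not a stored key.)
    return hashlib.sha256(device_secret + challenge).digest()

class Device:
    def __init__(self, secret: bytes):
        self._secret = secret

    def respond(self, challenge: bytes, nonce: bytes) -> bytes:
        # Bind the PUF response to the verifier's nonce to prevent replay.
        r = puf(self._secret, challenge)
        return hmac.new(r, nonce, hashlib.sha256).digest()

class Server:
    def __init__(self, enrolled: dict):
        # enrolled: challenge -> response pairs recorded at enrollment time,
        # while the device is still in a trusted environment.
        self._enrolled = enrolled

    def start_auth(self):
        self._challenge = next(iter(self._enrolled))
        self._nonce = os.urandom(16)
        return self._challenge, self._nonce

    def verify(self, tag: bytes) -> bool:
        expected = hmac.new(self._enrolled[self._challenge],
                            self._nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

secret = os.urandom(32)                 # the device's "silicon fingerprint"
challenge = os.urandom(8)
server = Server({challenge: puf(secret, challenge)})
device = Device(secret)

ch, nonce = server.start_auth()
assert server.verify(device.respond(ch, nonce))      # genuine device passes
clone = Device(os.urandom(32))                       # device without the PUF
assert not server.verify(clone.respond(ch, nonce))   # clone fails
```

Note that the server never stores the device's secret itself, only enrolled challenge-response pairs, which is the property that lets PUF schemes avoid secure non-volatile key storage on the device.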
Citations: 0
Error in Ulps of the Multiplication or Division by a Correctly-Rounded Function or Constant in Binary Floating-Point Arithmetic
IF 5.9 | CAS Zone 2, Computer Science | Q1 Computer Science | Pub Date: 2023-07-18 | DOI: 10.1109/TETC.2023.3294986
Nicolas Brisebarre;Jean-Michel Muller;Joris Picot
Assume we use a binary floating-point arithmetic and that $\operatorname{RN}$ is the round-to-nearest function. Also assume that $c$ is a constant or a real function of one or more variables, and that we have at our disposal a correctly rounded implementation of $c$, say $\hat{c} = \operatorname{RN}(c)$. For evaluating $x \cdot c$ (resp. $x / c$ or $c / x$), the natural way is to replace it by $\operatorname{RN}(x \cdot \hat{c})$ (resp. $\operatorname{RN}(x / \hat{c})$ or $\operatorname{RN}(\hat{c} / x)$), that is, to call function $\hat{c}$ and to perform a floating-point multiplication or division. This can be generalized to the approximation of $n/d$ by $\operatorname{RN}(\hat{n}/\hat{d})$ and the approximation of $n \cdot d$ by $\operatorname{RN}(\hat{n} \cdot \hat{d})$, where $\hat{n} = \operatorname{RN}(n)$ and $\hat{d} = \operatorname{RN}(d)$, and $n$ and $d$ are functions for which we have at our disposal a correctly rounded implementation. We discuss tight error bounds in ulps of such approximations. From our results, one immediately obtains tight error bounds for calculations such as $\mathtt{x * pi}$, $\mathtt{ln(2)/x}$, $\mathtt{x/(y+z)}$, $\mathtt{(x+y)*z}$, $\mathtt{x/sqrt(y)}$, $\mathtt{sqrt(x)/y}$, $\mathtt{(x+y)(z+t)}$, $\mathtt{(x+y)/(z+t)}$, $\mathtt{(x+y)/(zt)}$, etc. in floating-point arithmetic.
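The quantity the paper bounds, error in ulps of a doubly-rounded result, can be measured directly with exact rational arithmetic. The sketch below (an illustration of the metric, not the paper's bound derivation) evaluates x·π as RN(x · RN(π)) in binary64 and reports the combined error of the two roundings in ulps of the result.

```python
import math
from fractions import Fraction

def ulp(x: float) -> Fraction:
    # Spacing of binary64 numbers around |x| (unit in the last place).
    return Fraction(math.ulp(x))

def error_in_ulps(computed: float, exact: Fraction) -> Fraction:
    # |computed - exact| measured in ulps of the computed result.
    return abs(Fraction(computed) - exact) / ulp(computed)

# pi to 50 decimal digits, as an exact rational reference value.
PI = Fraction("3.14159265358979323846264338327950288419716939937510")

x = 1.5
computed = x * math.pi          # RN(x * RN(pi)): two rounding errors combine
err = error_in_ulps(computed, Fraction(x) * PI)
print(float(err))               # total error, in ulps of the result

# The propagated error of RN(pi) plus the final rounding stays below 1 ulp here.
assert 0 <= float(err) < 1.0
```

Since `math.pi` is the correctly rounded double, its own error is below half an ulp; the final multiplication adds at most another half ulp, which is exactly the kind of accumulation whose tight bound the paper works out.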
Citations: 0
GRAPHIC: Gather and Process Harmoniously in the Cache With High Parallelism and Flexibility
IF 5.9 | CAS Zone 2, Computer Science | Q1 Computer Science | Pub Date: 2023-07-17 | DOI: 10.1109/TETC.2023.3290683
Yiming Chen;Mingyen Lee;Guohao Dai;Mufeng Zhou;Nagadastagiri Challapalle;Tianyi Wang;Yao Yu;Yongpan Liu;Yu Wang;Huazhong Yang;Vijaykrishnan Narayanan;Xueqing Li
In-memory computing (IMC) has been proposed to overcome the von Neumann bottleneck in data-intensive applications. However, existing IMC solutions cannot achieve both high parallelism and high flexibility, which limits their application in more general scenarios: as a highly parallel IMC design, the functionality of a MAC crossbar is limited to matrix-vector multiplication, while another IMC method, logic-in-memory (LiM), is more flexible in supporting different logic functions but has low parallelism. To improve LiM parallelism, we investigate how the single-instruction, multiple-data (SIMD) instruction sets of conventional CPUs could help expand the number of LiM operands processed in one cycle. The biggest challenge is the inefficiency of handling non-continuous data in parallel, due to SIMD's limitations of (i) continuous addressing, (ii) limited cache bandwidth, and (iii) large full-resolution parallel computing overheads. This article presents GRAPHIC, the first reported in-memory SIMD architecture that solves the parallelism and irregular-data-access challenges of applying SIMD to LiM. GRAPHIC exploits content-addressable memory (CAM) and row-wise-accessible SRAM. By providing in-situ, fully parallel, low-overhead operations for address search and cache read-compute-and-update, GRAPHIC accomplishes high-efficiency gather and aggregation with high parallelism, high energy efficiency, low latency, and low area overheads. Experiments on both continuous-data-access and irregular-data-pattern applications show an average speedup of 5x over an iso-area AVX-like LiM design, and 3-5x over the emerging CAM-based accelerators CAPE and GaaS-X in advanced techniques.
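As a purely functional illustration of the gather-and-aggregate pattern GRAPHIC targets (not a model of its hardware), the toy code below mimics a CAM lookup that conceptually compares every row at once, followed by a reduction over the non-contiguous matches, which is the access pattern that address-contiguous SIMD handles poorly. The field names and API are invented for the example.

```python
def cam_search(rows, key_field, key):
    # Content-addressable search: in hardware every row is compared in
    # parallel; here we model it as returning all matching row indices.
    return [i for i, row in enumerate(rows) if row[key_field] == key]

def gather_aggregate(rows, key_field, key, value_field):
    # Gather the scattered matches, then reduce them in one pass -- the
    # step GRAPHIC performs in-cache instead of streaming rows to the CPU.
    idx = cam_search(rows, key_field, key)
    return sum(rows[i][value_field] for i in idx)

# Edge list fragment: rows matching node 0 are non-contiguous in memory.
rows = [{"node": 0, "w": 3}, {"node": 1, "w": 5}, {"node": 0, "w": 4}]
assert cam_search(rows, "node", 0) == [0, 2]
assert gather_aggregate(rows, "node", 0, "w") == 7
```

The point of the sketch is the data movement it avoids: only the search key and the final sum cross the memory boundary, while the per-row compares and reads stay "inside" the array.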
Citations: 0
New Construction of Balanced Codes Based on Weights of Data for DNA Storage
IF 5.9 | CAS Zone 2, Computer Science | Q1 Computer Science | Pub Date: 2023-07-13 | DOI: 10.1109/TETC.2023.3293477
Xiaozhou Lu;Sunghwan Kim
Because maintaining a properly balanced GC content is crucial for minimizing errors in DNA storage, constructing GC-balanced DNA codes has become an important research topic. In this article, we propose a novel code construction method based on the weight distribution of the data, which enables us to construct GC-balanced DNA codes. Additionally, we introduce a specific encoding process for both balanced and imbalanced data parts. One of the key differences between the proposed codes and existing codes is that the parity lengths of the proposed codes vary depending on the data parts, while the parity lengths of existing codes remain fixed. To evaluate the effectiveness of the proposed codes, we compare their average parity lengths to those of existing codes. Our results demonstrate that the proposed codes have significantly shorter average parity lengths for DNA sequences with appropriate GC contents.
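The core idea of weight-dependent parity can be illustrated with a deliberately simplified encoder (our own toy scheme, not the paper's construction): compute the GC weight of the data block, then append just enough complementary bases to bring the overall GC ratio to exactly one half. Already-balanced blocks then get zero parity, while heavily skewed blocks get longer parity, which is the variable-length behavior the abstract describes.

```python
def gc_weight(seq: str) -> int:
    # Number of G/C bases: the "weight" that drives the parity length.
    return sum(base in "GC" for base in seq)

def balance(seq: str) -> str:
    # Append the minimum number of bases that brings the GC ratio to 1/2.
    n, w = len(seq), gc_weight(seq)
    imbalance = 2 * w - n                # > 0: GC-heavy, < 0: AT-heavy
    parity = ("A" if imbalance > 0 else "G") * abs(imbalance)
    return seq + parity

for data in ["GCGC", "ATAT", "GCAT"]:
    code = balance(data)
    assert 2 * gc_weight(code) == len(code)   # exactly 50% GC
    print(data, "->", code)                   # GCGC -> GCGCAAAA, ATAT -> ATATGGGG, GCAT -> GCAT
```

A practical code would also need the decoder to recover the parity length and would bound the worst case; the sketch only shows why parity length naturally tracks the data's weight.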
Citations: 0
CRAM-Based Acceleration for Intermittent Computing of Parallelizable Tasks
IF 5.9 | CAS Zone 2, Computer Science | Q1 Computer Science | Pub Date: 2023-07-12 | DOI: 10.1109/TETC.2023.3293426
Khakim Akhunov;Kasım Sinan Yıldırım
There is an emerging requirement for performing data-intensive parallel computations, e.g., machine-learning inference, locally on batteryless sensors. These devices are resource-constrained and operate intermittently due to the irregular energy availability in the environment. Intermittent execution can lead to several side effects that may prevent the correct execution of computational tasks. Even though recent studies have proposed methods to cope with these side effects and execute such tasks correctly, they overlooked the efficient intermittent execution of parallelizable, data-intensive machine-learning tasks. In this article, we present PiMCo, a novel programmable CRAM-based in-memory coprocessor that exploits the processing-in-memory (PIM) paradigm and facilitates the power-failure-resilient execution of parallelizable computational loads. Contrary to existing PIM solutions for intermittent computing, PiMCo promotes better programmability to accelerate a variety of parallelizable tasks. Our performance evaluation demonstrates that PiMCo improves the performance of existing low-power accelerators for intermittent computing by up to 8× and energy efficiency by up to 150×.
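The side effects of intermittent execution can be made concrete with a toy checkpointing model (a generic illustration, not PiMCo's mechanism): a task commits its progress to non-volatile state after each item, so a simulated power failure neither repeats committed work nor loses it. All names here (`NonVolatile`, `intermittent_sum`) are invented for the example.

```python
import random

class NonVolatile:
    # Stand-in for FRAM/flash: this state survives "power failures".
    def __init__(self):
        self.data = {"next": 0, "acc": 0}

def intermittent_sum(values, nv, fail_prob=0.3, rng=None):
    # Re-entrant task: volatile locals are recomputed on every attempt,
    # and both pieces of progress are committed atomically, so each item
    # is accounted for exactly once no matter where power is lost.
    rng = rng or random.Random(42)
    while nv.data["next"] < len(values):
        i = nv.data["next"]
        result = nv.data["acc"] + values[i]
        if rng.random() < fail_prob:
            continue           # power failure before commit: item i retried
        # Atomic commit (real systems use two-phase/double-buffered writes).
        nv.data = {"next": i + 1, "acc": result}
    return nv.data["acc"]

nv = NonVolatile()
assert intermittent_sum([1, 2, 3, 4], nv) == 10
assert nv.data["next"] == 4
```

Dropping the atomic commit (writing `next` and `acc` separately) is exactly the kind of side effect the literature guards against: a failure between the two writes would double-count or skip an item.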
Citations: 0
3DL-PIM: A Look-Up Table Oriented Programmable Processing in Memory Architecture Based on the 3-D Stacked Memory for Data-Intensive Applications
IF 5.9 | CAS Zone 2, Computer Science | Q1 Computer Science | Pub Date: 2023-07-12 | DOI: 10.1109/TETC.2023.3293140
Purab Ranjan Sutradhar;Sathwika Bavikadi;Sai Manoj Pudukotai Dinakarrao;Mark A. Indovina;Amlan Ganguly
Memory-centric computing systems have demonstrated superior performance and efficiency in memory-intensive applications compared to state-of-the-art CPUs and GPUs. 3-D stacked DRAM architectures unlock higher I/O data bandwidth than the traditional 2-D memory architecture and therefore are better suited for incorporating memory-centric processors. However, merely integrating high-precision ALUs in the 3-D stacked memory does not ensure an optimized design since such a design can only achieve a limited utilization of the internal bandwidth of a memory chip and limited operational parallelization. To address this, we propose 3DL-PIM, a 3-D stacked memory-based Processing in Memory (PIM) architecture that locates a plurality of Look-up Table (LUT)-based low-footprint Processing Elements (PE) within the memory banks in order to achieve high parallel computing performance by maximizing data-bandwidth utilization. Instead of relying on the traditional logic-based ALUs, the PEs are formed by clustering a group of programmable LUTs and therefore can be programmed on-the-fly to perform various logic/arithmetic operations. Our simulations show that 3DL-PIM can achieve respectively up to 2.6× higher processing performance at 2.65× higher area efficiency compared to a state-of-the-art 3-D stacked memory-based accelerator.
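A software caricature of a LUT-based processing element helps see why such PEs are both small and reprogrammable (this is an illustration of the general idea, not the 3DL-PIM microarchitecture): an operation is "programmed" by filling a table with its results, after which executing it is a pure lookup, and a bank of PEs applies the same table to many operand pairs in parallel.

```python
def program_lut(op, bits=4):
    # Precompute op(a, b) for every operand pair. The "PE" is then just a
    # table lookup; reprogramming means swapping in a different table.
    size = 1 << bits
    return [[op(a, b) % size for b in range(size)] for a in range(size)]

ADD = program_lut(lambda a, b: a + b)   # 4-bit modular adder
XOR = program_lut(lambda a, b: a ^ b)   # 4-bit XOR

def pe_exec(lut, a_vec, b_vec):
    # One bank of PEs applying the same LUT to many operand pairs at once,
    # mimicking the data-parallel layout across memory banks.
    return [lut[a][b] for a, b in zip(a_vec, b_vec)]

assert pe_exec(ADD, [1, 2, 3], [4, 5, 6]) == [5, 7, 9]
assert pe_exec(XOR, [1, 2, 3], [4, 5, 6]) == [5, 7, 5]
```

The trade-off the sketch exposes is real: table size grows as 2^(2·bits), which is why hardware LUT-PIM designs keep operand widths narrow and compose wider operations from narrow lookups.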
Citations: 0
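As a rough software analogy (mine, not the authors' circuit design): a LUT-based processing element stores the full truth table of an operation and computes by addressing the table with the concatenated operand bits, so rewriting the table reprograms the element's function on the fly. A minimal sketch for 4-bit operands:

```python
def program_lut(op, width=4):
    """Fill a table with op(a, b) for every pair of width-bit operands,
    truncated to width bits (hardware LUT entries have fixed width)."""
    size = 1 << width
    return [op(a, b) & (size - 1) for a in range(size) for b in range(size)]

def lut_compute(lut, a, b, width=4):
    """Evaluate the programmed operation with a single table lookup:
    the concatenated operand bits form the table address."""
    return lut[(a << width) | b]

# The same "hardware" table performs different operations after reprogramming.
add_lut = program_lut(lambda a, b: a + b)  # modular 4-bit adder
xor_lut = program_lut(lambda a, b: a ^ b)  # bitwise XOR
```

For example, `lut_compute(add_lut, 15, 1)` wraps to 0, mirroring a fixed-width datapath; clustering many such tables per memory bank is what the paper attributes the parallelism to.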
A Graph-Incorporated Latent Factor Analysis Model for High-Dimensional and Sparse Data
IF 5.9 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date: 2023-07-11 DOI: 10.1109/TETC.2023.3292866
Di Wu;Yi He;Xin Luo
A high-dimensional and sparse (HiDS) matrix is frequently encountered in Big Data applications such as e-commerce systems and wireless sensor networks. Performing highly accurate representation learning on an HiDS matrix is of great significance, given the strong desire to extract latent knowledge from it. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings from its observed entries only, is one of the most effective and efficient approaches to this problem. However, most existing LFA-based models compute such embeddings directly on the HiDS matrix without exploiting its hidden graph structures, resulting in accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two ideas: 1) a graph is constructed to identify the hidden high-order interactions (HOI) among the nodes described by an HiDS matrix, and 2) a recurrent LFA structure is carefully designed to incorporate the HOI, improving the representation learning ability of the resulting model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which clearly supports its strong representation learning ability on HiDS data.
Citations: 2
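To make the LFA baseline concrete — this is a plain SGD factorization over observed entries only, written for illustration; it is not the graph-incorporated GLFA model, and every parameter value (`k`, `lr`, `reg`, `epochs`) is an assumption of mine:

```python
import random

def lfa_sgd(entries, n_rows, n_cols, k=2, lr=0.02, reg=0.01, epochs=800, seed=0):
    """Learn low-rank embeddings P (rows) and Q (columns) from the observed
    (i, j, value) entries of a sparse matrix; unobserved cells never enter
    the loss, which is what makes LFA practical on HiDS data."""
    rng = random.Random(seed)
    P = [[rng.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n_rows)]
    Q = [[rng.uniform(0.1, 0.5) for _ in range(k)] for _ in range(n_cols)]
    for _ in range(epochs):
        for i, j, r in entries:
            err = r - sum(P[i][f] * Q[j][f] for f in range(k))
            for f in range(k):
                p, q = P[i][f], Q[j][f]
                P[i][f] += lr * (err * q - reg * p)  # regularized SGD step
                Q[j][f] += lr * (err * p - reg * q)
    return P, Q

def predict(P, Q, i, j):
    """Estimate a (possibly missing) cell from the learned embeddings."""
    return sum(p * q for p, q in zip(P[i], Q[j]))
```

On a toy low-rank matrix with one cell held out, the held-out value is approximately recovered; GLFA's contribution is to feed high-order graph interactions into such a recurrent LFA structure.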
PISA: A Non-Volatile Processing-in-Sensor Accelerator for Imaging Systems
IF 5.9 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date: 2023-07-11 DOI: 10.1109/TETC.2023.3292251
Shaahin Angizi;Sepehr Tabrizchi;David Z. Pan;Arman Roohi
This work proposes a Processing-In-Sensor Accelerator, namely PISA, as a flexible, energy-efficient, and high-performance solution for real-time and smart image processing in AI devices. PISA intrinsically implements a coarse-grained convolution operation in Binarized-Weight Neural Networks (BWNNs) leveraging a novel compute-pixel with non-volatile weight storage at the sensor side. This remarkably reduces the power consumption of data conversion and transmission to an off-chip processor. The design is completed with a bit-wise near-sensor in-memory computing unit to process the remaining network layers. Once the object is detected, PISA switches to typical sensing mode to capture the image for a fine-grained convolution using only a near-sensor processing unit. Our circuit-to-application co-simulation results on a BWNN acceleration demonstrate minor accuracy degradation on various image datasets in coarse-grained evaluation compared to baseline BWNN models, while PISA achieves a frame rate of 1000 and efficiency of ~1.74 TOp/s/W. Lastly, PISA substantially reduces data conversion and transmission energy by ~84% compared to a baseline.
Citations: 2
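The coarse-grained BWNN step can be sketched in a few lines — a behavioral model written for illustration, not PISA's circuit. With weights binarized to ±1, each "multiply-accumulate" degenerates into an add or subtract of the pixel value, which is cheap enough to evaluate beside the sensor array:

```python
def bwnn_conv2d(image, w_sign, stride=1):
    """Valid-mode 2-D convolution with +1/-1 weights: every product is
    replaced by a signed accumulation of the pixel value."""
    kh, kw = len(w_sign), len(w_sign[0])
    out = []
    for r in range(0, len(image) - kh + 1, stride):
        row = []
        for c in range(0, len(image[0]) - kw + 1, stride):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    pix = image[r + i][c + j]
                    acc += pix if w_sign[i][j] > 0 else -pix  # add/sub, no multiply
            row.append(acc)
        out.append(row)
    return out
```

For the 3×3 image `[[1,2,3],[4,5,6],[7,8,9]]` and sign kernel `[[1,1],[1,-1]]` this yields `[[2,4],[8,10]]`; in PISA the same accumulation is claimed to happen at the compute-pixel before any analog-to-digital conversion.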
FINISH: Efficient and Scalable NMF-Based Federated Learning for Detecting Malware Activities
IF 5.9 CAS Tier 2 (Computer Science) Q1 Computer Science Pub Date: 2023-07-11 DOI: 10.1109/TETC.2023.3292924
Yu-Wei Chang;Hong-Yen Chen;Chansu Han;Tomohiro Morikawa;Takeshi Takahashi;Tsung-Nan Lin
5G networks, with their vast number of devices, pose security threats. Manual analysis of such extensive security data is complex. Dark-NMF can detect malware activities by monitoring unused IP address space, i.e., the darknet. However, the challenges of cooperative training for Dark-NMF are immense computational complexity with Big Data, communication overhead, and privacy concerns over darknet sensor IP addresses. Darknet sensors can observe multivariate time series of packets from the same hosts, represented as intersecting columns in different data matrices. Previous works do not consider intersecting columns, losing a host's semantics because they do not aggregate the host's time series. To solve these problems, we propose a federated IoT malware detection NMF for intersecting source hosts (FINISH) algorithm that offloads computing tasks to 5G multi-access edge computing (MEC). The experiments show that FINISH scales with data size, offering shorter computation time and fewer false positives than Dark-NMF. The comparison results demonstrate that FINISH has better computation and communication efficiency than related works and a short communication time, taking only 1/10 the execution time in a simulated 5G MEC. The experimental results can provide substantial insights into developing federated cybersecurity in the future.
Citations: 0
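The NMF core that FINISH builds on can be sketched with the classic Lee-Seung multiplicative updates — a centralized toy, not the federated FINISH protocol; the rank `k` and all shapes here are illustrative. A non-negative traffic matrix V (hosts × features) is factored into W·H, and Dark-NMF-style detectors then inspect the learned factors:

```python
import random

def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k=2, iters=300, eps=1e-9, seed=0):
    """Factor a non-negative matrix V ~ W*H with multiplicative updates;
    non-negativity is preserved because every update is a ratio."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.uniform(0.1, 1.0) for _ in range(k)] for _ in range(n)]
    H = [[rng.uniform(0.1, 1.0) for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WtV, WtWH = matmul(transpose(W), V), matmul(transpose(W), matmul(W, H))
        H = [[H[a][j] * WtV[a][j] / (WtWH[a][j] + eps) for j in range(m)]
             for a in range(k)]
        # W <- W * (V H^T) / (W H H^T)
        VHt, WHHt = matmul(V, transpose(H)), matmul(matmul(W, H), transpose(H))
        W = [[W[i][a] * VHt[i][a] / (WHHt[i][a] + eps) for a in range(k)]
             for i in range(n)]
    return W, H
```

FINISH's stated contribution is to run such factor updates federatedly across sensors while aggregating the intersecting source-host columns, so no raw darknet IP data leaves a site.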