
Latest publications from the Journal of Parallel and Distributed Computing

A General-Purpose K-Nearest Neighbor Method with an Efficient Pruning Strategy for GPUs
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-10-20 | DOI: 10.1016/j.jpdc.2025.105187
Jue Wang, Fumihiko Ino
K-nearest neighbor (kNN) search is widely applied to low- and high-dimensional tasks, as well as various data distributions and distance functions. However, its computational cost increases with the data volume, causing a bottleneck for many applications. The workload of existing tree-based methods increases linearly with the neighbor count k in the worst case. In addition, some tree-based methods only apply to tasks with L2 distances and may suffer severe warp divergence when employed on GPUs. Our goal is to develop a general-purpose kNN method based on cluster sorting that achieves better pruning efficiency than tree-based approaches. We optimize the proposed method to achieve higher performance on tasks with different dimensionalities or distance functions. The proposed Sort, TraversE, and then Prune (STEP) algorithm is a kNN method that clusters the data points beforehand. Across varying 1) numbers of data points, 2) numbers of query points, 3) neighbor counts, 4) dimensions, and 5) distance metrics, the STEP method offers high performance for the following reasons. First, our method prunes the data points efficiently by sorting the clusters for each query. Second, we exploit the single-instruction multiple-threads (SIMT) architecture of the GPU and utilize both coarse- and fine-grained parallelism to accelerate computation. The proposed method computes all queries concurrently and minimizes warp divergence by assigning each query to a GPU warp. Third, the STEP method rapidly updates the kNN results using bitonic operations. Fourth, we propose an adaptive approach that automatically switches from the indexing approach to the exhaustive approach to achieve good scalability on high-dimensional data. Finally, we develop a variant of Gärtner’s bounding sphere algorithm so that our indexing method can handle distance metrics other than the L2 distance. The STEP method achieves a 15.9 times speedup with L2 distances and a 36.7 times speedup with angular distances compared with other state-of-the-art methods.
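The cluster-sort-and-prune principle behind STEP can be illustrated with a small CPU-side sketch: cluster the data offline, then for each query visit clusters in order of centroid distance and skip any cluster whose distance lower bound cannot beat the current k-th neighbor. The sketch below is a minimal NumPy illustration of that general idea under our own assumptions (function names, Lloyd-style clustering, L2 distance only); it does not reproduce the paper's GPU/SIMT kernels, bitonic top-k updates, adaptive switching, or bounding-sphere variant.

```python
# Simplified CPU sketch of cluster-sorted kNN pruning (illustrative only).
import numpy as np


def build_index(points, n_clusters, seed=0):
    """Cluster the data offline (a few Lloyd iterations) and record each
    cluster's centroid and the radius of its farthest member."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), n_clusters, replace=False)]
    for _ in range(10):
        labels = np.argmin(
            np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2), axis=1
        )
        for c in range(n_clusters):
            members = points[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    radii = np.array([
        np.linalg.norm(points[labels == c] - centroids[c], axis=1).max(initial=0.0)
        for c in range(n_clusters)
    ])
    return centroids, radii, labels


def cluster_knn(query, points, centroids, radii, labels, k):
    """Visit clusters in order of centroid distance; skip any cluster whose
    lower bound (centroid distance minus radius) cannot beat the k-th best."""
    centroid_dist = np.linalg.norm(centroids - query, axis=1)
    best_d, best_i = np.full(k, np.inf), np.full(k, -1)
    for c in np.argsort(centroid_dist):
        if centroid_dist[c] - radii[c] > best_d[-1]:
            continue  # triangle inequality: no point in this cluster can improve the result
        idx = np.flatnonzero(labels == c)
        d = np.linalg.norm(points[idx] - query, axis=1)
        cand_d = np.concatenate([best_d, d])
        cand_i = np.concatenate([best_i, idx])
        keep = np.argsort(cand_d)[:k]
        best_d, best_i = cand_d[keep], cand_i[keep]
    return best_i, best_d


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(size=(5000, 8))
    centroids, radii, labels = build_index(data, n_clusters=50)
    query = rng.normal(size=8)
    idx, dist = cluster_knn(query, data, centroids, radii, labels, k=10)
    brute = np.argsort(np.linalg.norm(data - query, axis=1))[:10]
    assert set(idx.tolist()) == set(brute.tolist())  # matches brute force on this example
```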
Citations: 0
Security vulnerabilities and enhancement of a dynamic auditing scheme for regenerating code-based storage in cloud-fog-assisted IIoT
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-10-14 | DOI: 10.1016/j.jpdc.2025.105185
Guangjun Liu, Jinbo Xiong, Ximeng Liu, Xiang Zou, Chenghu Ke, Zengfa Dou
In a recent publication, Liu et al. put forth a privacy-preserving dynamic auditing scheme for distributed encoded storage systems in cloud-fog-assisted Industrial Internet of Things (IIoT) [Internet of Things, DOI: 10.1016/j.iot.2024.101084]. Each encoded data segment utilizes the ZSS signature to create its corresponding authentication tag. The fog server is challenged and rigorously verified using a bilinear pairing map. In this paper, we demonstrate the security vulnerabilities of Liu et al.’s scheme by mounting a block forgery attack and an identifier forgery attack, respectively. In particular, an adversarial fog server can successfully deceive the proxy auditor through arbitrary unauthorised data tampering or identifier impersonation. We also provide an alternative scheme to address the security weaknesses, and highlight the challenges of cloud data auditing tailored for cloud-fog-enabled IIoT.
Citations: 0
A lightweight fine-grained scheme for distinguishing the hotness of warm data to reduce segment cleaning overhead
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-10-10 | DOI: 10.1016/j.jpdc.2025.105183
Lihua Yang, Yang Xiao, Zhipeng Tan, Fang Wang, Weizhao Lin, Wei Zhang, Jiaxin Li, Kai Lu
With the widespread adoption of flash memory, the Flash Friendly File System (F2FS), designed around flash memory characteristics, has become widely used in large data centers. However, F2FS incurs significant cleaning overhead due to its logging-scheme writes. We observe that warm data in F2FS account for a substantial proportion, at least 80%. Nevertheless, the mixed storage of warm data with varying hotness exacerbates segment cleaning challenges. To address this issue, we propose a scheme called M2H, which performs fine-grained management of warm data hotness identified by the K-means clustering algorithm. M2H determines hotness by considering factors such as file block update distance, most-recently-used distance, and workload characteristics. M2H facilitates Multi-log delayed writing and Modified segment cleaning based on Hotness. To reduce the cost of distinguishing data hotness at the file block level, we employ Mini Batch K-means, referred to as HMBK. Moreover, for servers equipped with GPUs, the clustering process can be offloaded to the GPU, known as HGPU. We conduct a comprehensive comparison of traditional F2FS, M2H, HMBK, and HGPU on a real platform. Results show that, compared to traditional F2FS, HGPU reduces the number of segment cleanings by 54.41% to 97.93%.
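As a rough illustration of the hotness-identification step, the sketch below groups synthetic per-block features with scikit-learn's Mini Batch K-means. The feature definitions, cluster count, and data here are our own assumptions for demonstration; the actual M2H/HMBK logic runs inside F2FS with its own feature definitions and is not reproduced.

```python
# Illustrative sketch: splitting warm file blocks into hotness groups with
# Mini Batch K-means on synthetic per-block features (assumed, not F2FS's).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(42)
n_blocks = 10_000

# Assumed per-block features: update distance, most-recently-used distance,
# and a coarse workload indicator (e.g., sequential-write ratio).
update_distance = rng.exponential(scale=500.0, size=n_blocks)
mru_distance = rng.exponential(scale=200.0, size=n_blocks)
workload_ratio = rng.uniform(0.0, 1.0, size=n_blocks)

features = np.column_stack([update_distance, mru_distance, workload_ratio])
# Standardize so no single feature dominates the Euclidean distance.
features = (features - features.mean(axis=0)) / features.std(axis=0)

# Split the warm blocks into a few hotness groups; each group would then be
# directed to its own log under a multi-log delayed-writing policy.
kmeans = MiniBatchKMeans(n_clusters=4, batch_size=1024, n_init=3, random_state=0)
hotness_group = kmeans.fit_predict(features)

for g in range(4):
    mask = hotness_group == g
    print(f"group {g}: {mask.sum():5d} blocks, "
          f"mean update distance {update_distance[mask].mean():8.1f}")
```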
Citations: 0
Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-24 | DOI: 10.1016/S0743-7315(25)00143-1
Citations: 0
SEDViN: Secure embedding for dynamic virtual network requests using a multi-attribute matching game
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-03 | DOI: 10.1016/j.jpdc.2025.105171
T.G. Keerthan Kumar, Rahul Kumar, Anirudh Munnur Achal, Anurag Satpathy, Sourav Kanti Addya
Network virtualization (NV) has gained significant attention as it allows service providers (SPs) to share substrate network (SN) resources. It is achieved by partitioning them into isolated virtual network requests (VNRs) comprising interrelated virtual machines (VMs) and virtual links (VLs). Although NV provides various advantages, such as service separation, enhanced quality-of-service, reliability, and improved SN utilization, it also presents multiple scientific challenges. In this context, one pivotal challenge encountered by researchers is secure virtual network embedding (SVNE). SVNE encompasses assigning SN resources to the components of a VNR, i.e., VMs and VLs, while adhering to security demands, which is a computationally intractable problem, as it is proven to be NP-Hard. In this context, maximizing the acceptance and revenue-to-cost ratios remains of utmost priority for SPs, as it not only increases revenue but also effectively utilizes the large pool of SN resources. Though VNE is a well-researched problem, the existing literature has the following flaws: (i.) the security features of VMs and VLs are ignored, (ii.) topological attributes receive limited consideration, and (iii.) the methods are restricted to static VNRs. SPs therefore need an embedding framework that overcomes the abovementioned pitfalls. This work proposes such a framework, Secure Embedding for Dynamic Virtual Network requests using a multi-attribute matching game (SEDViN). In SEDViN, a matching game based on the deferred acceptance algorithm (DAA) is used for effective embedding. SEDViN operates primarily in two steps to obtain a secure embedding of dynamic VNRs. First, it generates a unified ranking of VMs and servers using a combination of entropy and the technique for order of preference by similarity to ideal solution (TOPSIS), considering network, security, and system attributes. Taking these rankings as inputs, the second step performs VNR embedding: VM embedding uses the deferred acceptance approach with a one-to-many matching strategy, and VL embedding uses the shortest-path algorithm. The performance of SEDViN is evaluated through simulations and compared against different baseline approaches. The simulation outcomes show that SEDViN surpasses the baselines with gains of 56% in the acceptance ratio and 44% in the revenue-to-cost ratio.
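The TOPSIS ranking step admits a compact stand-alone illustration. The sketch below ranks hypothetical candidate servers over assumed network, security, and system attributes with hand-picked weights; SEDViN's entropy-derived weights and the deferred-acceptance matching that consumes this ranking are not reproduced here.

```python
# Minimal TOPSIS sketch: rank candidate servers by weighted closeness to an
# ideal point. All attribute values, weights, and benefit/cost directions are
# illustrative assumptions, not the paper's experimental setup.
import numpy as np

# Rows: candidate servers; columns: bandwidth (benefit), security level
# (benefit), CPU load (cost), latency (cost).
decision = np.array([
    [100.0, 3.0, 0.40, 12.0],
    [ 80.0, 5.0, 0.20, 20.0],
    [120.0, 2.0, 0.70,  8.0],
    [ 90.0, 4.0, 0.30, 15.0],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])        # assumed, not entropy-derived
benefit = np.array([True, True, False, False])  # direction of each criterion

# 1) Vector-normalize each column, then apply the weights.
weighted = decision / np.linalg.norm(decision, axis=0) * weights

# 2) Ideal and anti-ideal points depend on the criterion direction.
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# 3) Closeness coefficient: relative distance to the anti-ideal point.
d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)

ranking = np.argsort(-closeness)
print("server ranking (best first):", ranking, "scores:", np.round(closeness, 3))
```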
Citations: 0
A scalable tensor-based MDTW approach for multi-modal time series patterns clustering
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-03 | DOI: 10.1016/j.jpdc.2025.105173
Bahati Alam Sanga, Laurence T. Yang, Shunli Zhang, Zecan Yang, Nicholaus Gati
Multi-modal Time Series (MTS) is a vital ingredient of Predictive Multi-modal Artificial Intelligence (PMAI). MTS systems capture varying temporal modalities and their inherent dependencies for accurate analytics. However, efficiently exploring these cross-modality relationships is a challenging research problem due to their complexity and information redundancy. Pairwise similarity measurement of MTS patterns is a prerequisite for PMAI. Multi-modal Dynamic Time Warping (MDTW) is frequently explored to quantify MTS similarity. Yet it relies on orthogonally conditioned local similarity measures that ignore the contributions of the underlying structural relationships of the MTS during warping, and is therefore susceptible to unrealistic matching. This paper addresses these setbacks with a scalable MTS recognition model, named Tensor-Slices Distance (TSD)-based MDTW (TSD-MDTW), which is subsequently advanced to two further models, termed Weighted modality and TSD (WmTSD-MDTW) and TSD-Mahalanobis (TSDMaha-MDTW). To quantify an alignment's cost, TSD-MDTW incorporates intrinsic spatial dependencies between the modalities' coordinates, WmTSD-MDTW relaxes information redundancy by weighting modalities based on information richness, and TSDMaha-MDTW embodies both modality dependencies and the innate spatial dependencies of their coordinates. In addition, the paper proposes a scalable Tensor-based DTW (TDTW) model that re-formulates MDTW across multiple dimensions whose warping processes can be computed in parallel. Theoretical and empirical results on multi-modal MTS datasets encompassing load patterns and meteorological modalities reveal TDTW's efficiency and the proposed models' superior cluster compactness and separation over MDTW employing state-of-the-art local similarity measures.
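For reference, the classical multi-dimensional DTW recurrence that the proposed models build on can be written in a few lines with a pluggable per-step distance. The sketch below is that baseline only, under our own naming; the tensor-slice (TSD) and Mahalanobis step distances, the modality weighting, and the parallel tensor re-formulation are not reproduced.

```python
# Plain dynamic-programming sketch of multi-dimensional DTW with a pluggable
# per-step distance (defaults to Euclidean between d-dimensional frames).
import numpy as np


def mdtw(x, y, step_dist=None):
    """x: (n, d) and y: (m, d) multi-modal series; returns the DTW alignment cost."""
    if step_dist is None:
        step_dist = lambda a, b: float(np.linalg.norm(a - b))
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = step_dist(x[i - 1], y[j - 1])
            # Classic DTW recurrence: extend the cheapest of the three predecessors.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(50, 3))                  # 50 time steps, 3 modalities
    b = a[::2] + 0.05 * rng.normal(size=(25, 3))  # a warped, noisy version of a
    c = rng.normal(size=(40, 3))                  # an unrelated series
    print("DTW(a, warped a):", round(mdtw(a, b), 2))
    print("DTW(a, unrelated):", round(mdtw(a, c), 2))
```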
Citations: 0
Threat to trust: A systematic review on Internet of medical things security
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-09-03 | DOI: 10.1016/j.jpdc.2025.105172
Elham Shammar, Xiaohui Cui, Ammar Zahary, Saeed Hamood Alsamhi, Mohammed A.A. Al-qaness
The Internet of Medical Things (IoMT) has transformed healthcare by enabling seamless communication among medical devices, supporting real-time monitoring, diagnostics, vital patient data tracking, improved patient care, disease prediction, early warning, and enhanced operational efficiency. Due to the sensitive nature of health-related data, IoMT has become a prime target for cyberattacks such as ransomware, denial-of-service (DoS) attacks, and malware, raising significant privacy and security concerns and prompting comprehensive evaluation of IoMT security. Securing IoMT requires efficient data processing across distributed systems to ensure both confidentiality and availability. Parallel and distributed computing can address scalability and performance challenges in IoMT security, particularly in enabling real-time monitoring and threat detection across multiple interconnected devices. This survey conducts a systematic literature review (SLR) of IoMT security to analyze key issues, categorize security threats, attack vectors, and vulnerabilities, and examine how emerging technologies such as blockchain, machine learning (ML), and physically unclonable functions (PUF) are strengthening IoMT security. The SLR covers IoMT security research published between 2020 and 2024 to identify challenges and provide insights for future researchers and developers of new IoMT security models, offering practitioners and researchers guidance for developing reliable and resilient IoMT security systems in the decentralized healthcare industry.
Citations: 0
Optimization of single node load balancing for lattice Boltzmann method on heterogeneous high performance computers
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-21 | DOI: 10.1016/j.jpdc.2025.105169
Adrian Kummerländer, Fedor Bukreev, Dennis Teutscher, Marcio Dorn, Mathias J. Krause
Lattice Boltzmann Methods (LBM) are particularly suited for highly parallel computational fluid dynamics simulations on heterogeneous HPC systems combining CPUs and GPUs. However, the computationally dominant collide-and-stream loops commonly utilize only GPUs, leaving CPU resources underutilized. To overcome this limitation, this article proposes a novel load balancing strategy based on a genetic algorithm for bottom-up, cost-aware optimization of spatial domain decompositions. This approach generates subdomains and rank assignments inherently suited for cooperative execution on both CPUs and GPUs. Implemented in the open source framework OpenLB, the strategy is applied to turbulent flow reference cases, including a multi-physics reactive mixer. A detailed evaluation on heterogeneous HPC nodes demonstrates significant performance gains, achieving speedups of up to 87% compared to traditional GPU-only execution. This work therefore establishes cost-aware, bottom-up decomposition as a suitable strategy for exploiting the native heterogeneity of modern compute nodes.
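To make the cost-aware assignment idea concrete, the toy sketch below uses a small genetic algorithm to assign fixed subdomains to one fast (GPU-like) worker and several slow (CPU-like) workers so that the estimated bulk-synchronous step time is minimized. The worker speeds, cost model, and GA parameters are illustrative assumptions; the paper's strategy additionally optimizes the spatial decomposition itself bottom-up inside OpenLB.

```python
# Toy genetic algorithm for cost-aware assignment of subdomains to
# heterogeneous workers (all numbers below are assumptions for illustration).
import numpy as np

rng = np.random.default_rng(0)

n_blocks = 64
block_work = rng.uniform(0.5, 2.0, size=n_blocks)    # relative cell counts
worker_speed = np.array([10.0, 1.0, 1.0, 1.0, 1.0])  # worker 0 models a GPU


def makespan(assignment):
    """Estimated per-step time: the slowest worker dominates a bulk-synchronous step."""
    loads = np.zeros(len(worker_speed))
    np.add.at(loads, assignment, block_work)
    return np.max(loads / worker_speed)


def evolve(pop_size=60, generations=200, mutation_rate=0.05):
    pop = rng.integers(0, len(worker_speed), size=(pop_size, n_blocks))
    for _ in range(generations):
        fitness = np.array([makespan(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]   # keep the better half
        # Uniform crossover between random elite parents, then point mutation.
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, n_blocks)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        mutate = rng.random((pop_size, n_blocks)) < mutation_rate
        children[mutate] = rng.integers(0, len(worker_speed), size=int(mutate.sum()))
        children[0] = elite[0]                               # elitism: keep the best
        pop = children
    fitness = np.array([makespan(ind) for ind in pop])
    return pop[np.argmin(fitness)], float(fitness.min())


best, t = evolve()
print("GPU-like worker gets", int(np.sum(best == 0)), "of", n_blocks,
      "blocks; estimated step time:", round(t, 3))
```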
Citations: 0
HeaPS: Heterogeneity-aware participant selection for efficient federated learning
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-19 | DOI: 10.1016/j.jpdc.2025.105168
Duo Yang, Bing Hu, Yunqi Gao, A-Long Jin, An Liu, Kwan L. Yeung, Yang You
Federated learning enables collaborative model training among numerous clients. However, existing participant/client selection methods fail to fully leverage the advantages of clients with excellent computational or communication capabilities. In this paper, we propose HeaPS, a novel Heterogeneity-aware Participant Selection framework for efficient federated learning. We introduce a finer-grained global selection algorithm to select communication-strong leaders and computation-strong members from candidate clients. The leaders are responsible for communicating with the server to reduce per-round duration, as well as contributing gradients; while the members communicate with the leaders to contribute more gradients obtained from high-utility data to the global model and improve the final model accuracy. Meanwhile, we develop a gradient migration path generation algorithm to match the optimal leader for each member. We also design the client scheduler to facilitate parallel local training of leaders and members based on gradient migration. Experimental results show that, in comparison with state-of-the-art methods, HeaPS achieves a speedup of up to 3.20× in time-to-accuracy performance and improves the final accuracy by up to 3.57%. The code for HeaPS is available at https://github.com/Dora233/HeaPS.
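A toy version of heterogeneity-aware selection is sketched below: pick the clients with the fastest links as leaders and attach compute- and utility-strong clients to them as members. The scoring functions, leader/member counts, and round-robin assignment are our own stand-ins for HeaPS's utility metric, gradient-migration path generation, and scheduler, which the paper defines precisely.

```python
# Toy sketch of leader/member participant selection for federated learning.
# Client statistics and scoring rules are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_clients = 40

bandwidth = rng.lognormal(mean=2.0, sigma=0.6, size=n_clients)  # link speed to server
compute = rng.lognormal(mean=1.0, sigma=0.5, size=n_clients)    # local samples/s
data_utility = rng.uniform(0.1, 1.0, size=n_clients)            # e.g., loss-based utility

n_leaders, members_per_leader = 4, 3

# Leaders: the communication-strong clients that talk to the server directly.
leaders = np.argsort(-bandwidth)[:n_leaders]

# Members: remaining clients ranked by a compute-and-utility score, then
# attached to leaders round-robin (a stand-in for the migration-path step).
remaining = np.setdiff1d(np.arange(n_clients), leaders)
score = compute[remaining] * data_utility[remaining]
members = remaining[np.argsort(-score)[: n_leaders * members_per_leader]]
assignment = {int(l): [int(m) for m in members[i::n_leaders]]
              for i, l in enumerate(leaders)}

for leader, group in assignment.items():
    print(f"leader {leader} (bw={bandwidth[leader]:.1f}) <- members {group}")
```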
Citations: 0
A scheduler to foster data locality for GPU and out-of-core task-based linear algebra applications
IF 4 | CAS Zone 3 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2025-08-18 | DOI: 10.1016/j.jpdc.2025.105170
Maxime Gonthier, Loris Marchal, Samuel Thibault
Hardware accelerators like GPUs now provide a large part of the computational power used for scientific simulations. Despite their efficacy, GPUs possess limited memory and are connected to the main memory of the machine via a bandwidth limited bus. Scientific simulations often operate on very large data, that surpasses the GPU's memory capacity. Therefore, one has to turn to out-of-core computing: data is kept in a remote, slower memory (CPU memory), and moved back and forth from/to the device memory (GPU memory), a process also present for multicore CPUs with limited memory. In both cases, data movement quickly becomes a performance bottleneck. Task-based runtime schedulers have emerged as a convenient and efficient way to manage large applications on such heterogeneous platforms. We propose a scheduler for task-based runtimes that improves data locality for out-of-core linear algebra computations, to reduce data movement. We design a data-aware strategy for both task scheduling and data eviction from limited memories. We compare this scheduler to existing schedulers in runtime systems. Using StarPU, we show that our new scheduling strategy achieves comparable performance when memory is not a constraint, and significantly better performance when application input data exceeds memory, on both GPUs and CPU cores.
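The scheduling idea can be illustrated with a toy step: among ready tasks, run the one with the most input blocks already resident in the limited device memory, and when space is needed evict blocks that no pending task still requires, falling back to least recently used. This is a simplified sketch with invented task and block names; it is not StarPU's scheduler or the exact eviction policy evaluated in the paper.

```python
# Toy data-locality-aware scheduling step with a bounded device memory.
from collections import OrderedDict

MEMORY_LIMIT = 3                  # device memory, in data-block units

resident = OrderedDict()          # block id -> size, kept in LRU order
ready_tasks = [
    {"name": "gemm1", "inputs": {"A0", "B0"}},
    {"name": "gemm2", "inputs": {"A0", "B1"}},
    {"name": "gemm3", "inputs": {"A2", "B2"}},
]


def resident_inputs(task):
    """Locality score: how many of the task's inputs are already on the device."""
    return sum(1 for b in task["inputs"] if b in resident)


def ensure_loaded(task):
    """Load missing inputs, evicting blocks that neither this task nor any queued
    task still needs (falling back to least recently used)."""
    still_needed = set(task["inputs"]).union(*[t["inputs"] for t in ready_tasks])
    for block in task["inputs"]:
        if block in resident:
            resident.move_to_end(block)   # mark as recently used
            continue
        while len(resident) >= MEMORY_LIMIT:
            victim = next((b for b in resident if b not in still_needed),
                          next(iter(resident)))  # prefer unneeded blocks, else LRU
            resident.pop(victim)
        resident[block] = 1


while ready_tasks:
    task = max(ready_tasks, key=resident_inputs)  # best data locality first
    ready_tasks.remove(task)
    ensure_loaded(task)
    print(f"run {task['name']}; resident blocks: {sorted(resident)}")
```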
Citations: 0