
Future Generation Computer Systems-The International Journal of Escience: Latest publications

Skip index: Supporting efficient inter-block queries and query authentication on the blockchain
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-19 | DOI: 10.1016/j.future.2024.107556
Decentralized applications, the driving force behind the new Web3 paradigm, require continuous access to blockchain data. Their adoption, however, is hindered by the constantly increasing size of blockchains and the sequential-scan nature of their read operations, which introduces a clear inefficiency bottleneck. Moreover, the growing amount of data recorded on the blockchain makes resource-constrained light nodes dependent on untrusted full nodes for fetching information, creating a need for query authentication protocols that ensure result integrity. Motivated by these reasons, in this paper we propose the skip index, an indexing data structure that allows users to quickly retrieve information simultaneously from multiple blocks of a blockchain. Our solution is also designed to be used as an authenticated data structure to guarantee the integrity of query results for light nodes. We discuss the theoretical properties of skip indices, propose efficient algorithms for their construction and querying, and detail their computational complexity. Finally, we assess the effectiveness of our proposal through an experimental evaluation on the Ethereum blockchain. As a reference use case, we focus on the popular CryptoKitties application and simulate a scenario where users seek to retrieve the events generated by the service. Our experimental results suggest that skip indices offer a constant multiplicative speedup, thanks to search times that are at most logarithmic within a chosen search window. This makes it possible to reduce the number of visited blocks by up to two orders of magnitude compared to the naive sequential approach currently in use.
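The logarithmic search bound can be illustrated with a toy model of backward skip pointers at power-of-two distances (an assumption for illustration only; the paper's actual skip-index construction and its authentication layer are not reproduced here):

```python
def build_skip_pointers(n_blocks):
    """For each block i, record back-pointers to blocks i - 2**k (toy layout)."""
    ptrs = {}
    for i in range(n_blocks):
        ptrs[i] = [i - (1 << k) for k in range(n_blocks.bit_length())
                   if i - (1 << k) >= 0]
    return ptrs

def hops_between(start, target, ptrs):
    """Walk back from `start` to `target`, greedily taking the farthest
    pointer that does not overshoot; returns the number of hops."""
    cur, hops = start, 0
    while cur > target:
        cur = min(p for p in ptrs[cur] if p >= target)
        hops += 1
    return hops
```

A sequential scan from block 1023 back to block 0 visits every block, whereas the greedy skip walk needs only on the order of log2 of the distance in hops, which is where a multiplicative reduction in visited blocks comes from.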
Citations: 0
EASL: Enhanced append-only skip list index for agile block data retrieval on blockchain
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-18 | DOI: 10.1016/j.future.2024.107554
The widely recognised weakness of blockchain is the linear, temporal cost required for retrieving data, due to the sequential structure of data blocks. To address this, conventional approaches have relied on database indexing techniques applied to each individual replica copy of a blockchain. However, this only partially addresses the problem, because if the index is not distributed it is not available to devices in the blockchain network. If an index is to be incorporated and distributed within the blockchain, the unique attribute of immutability necessitates a more innovative approach. To that end, we propose an Enhanced Append-only Skip List (EASL). This specialised indexing technique utilises binary search with skip lists in blockchain, resulting in a sublinear cost for data retrieval. The EASL index is maintained by each newly appended blockchain block and offers enhanced readability and robustness using an explicitly recorded index structure. Our proposed technique is 42% more efficient in computation and 60% more efficient in storage consumption than its predecessor, the Deterministic Append-only Skip List (DASL) indexing technique. This is achieved through agile data retrieval, resulting in energy cost savings from less computational effort to maintain the index and less network bandwidth to retrieve blockchain data. The code for the proposed technique is publicly available on GitHub (https://github.com/jarednewell/EASL/) to expedite future research and encourage the practical application of this effective data index.
Citations: 0
Event log extraction methodology for Ethereum applications
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-18 | DOI: 10.1016/j.future.2024.107566
The adoption of smart contracts in decentralized blockchain-based applications enables reliable and certified audits. These audits allow the extraction of valuable information from blockchains, which can be used to reconstruct the execution of the application and facilitate advanced analyses. One of the most commonly used techniques in this context is process mining, which leverages event logs to trace and accurately represent the process execution of applications. However, extracting execution data from blockchains poses significant challenges, and current methodologies have some limitations. Most approaches are tailored to specific use cases, requiring that analysis techniques be defined during the smart contract's development. Other techniques are applied a posteriori, relying on blockchain events that often lack a standardized format. This absence of standardization requires complex processing steps to correlate logs with the executed actions, and such approaches are not universally applicable to all smart contracts on the blockchain, further limiting their scope. Lastly, none of the existing techniques can extract information from event logs embedded in internal transactions of smart contracts.
To address these limitations, we propose EveLog, an application-agnostic methodology that can be applied to any EVM-compatible application without predefined constraints. Its primary goal is to extract information from smart contracts, capturing both public and internal transactions, and organizing the results into a structured XES event log. The EveLog methodology consists of five key steps: (i) extraction of data from smart contract transactions, (ii) decoding raw data, (iii) selection of sorting criteria, (iv) construction of traces, and (v) generation of the XES event log. EveLog has been implemented in a client–server application and tested on existing solutions, specifically the CryptoKitties application, a blockchain-based game on the Ethereum blockchain. The study was conducted using 12,996 blocks, including over 8000 real transactions from the Ethereum mainnet.
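The five steps can be sketched as a toy pipeline (field names such as `block`, `log_index` and `case` are hypothetical stand-ins; the real EveLog extraction and decoding logic is far richer than this mock):

```python
import xml.etree.ElementTree as ET

def build_xes(raw_events):
    """Toy pipeline mirroring the five steps; field names are illustrative."""
    # (i)-(ii) extraction and decoding are mocked: events arrive as dicts
    # (iii) sorting criterion: block number, then log index within the block
    events = sorted(raw_events, key=lambda e: (e["block"], e["log_index"]))
    # (iv) group events into traces by a case identifier (e.g. a token id)
    traces = {}
    for e in events:
        traces.setdefault(e["case"], []).append(e)
    # (v) serialize the traces as a minimal XES document
    log = ET.Element("log", {"xes.version": "1.0"})
    for case, evs in traces.items():
        trace = ET.SubElement(log, "trace")
        ET.SubElement(trace, "string", {"key": "concept:name", "value": str(case)})
        for e in evs:
            ev = ET.SubElement(trace, "event")
            ET.SubElement(ev, "string", {"key": "concept:name", "value": e["name"]})
    return ET.tostring(log, encoding="unicode")
```

Feeding two out-of-order mock events for the same case yields a single trace whose events appear in block order, which is the shape process-mining tools expect from an XES log.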
Citations: 0
Confidential outsourced support vector machine learning based on well-separated structure
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-18 | DOI: 10.1016/j.future.2024.107564
The Support Vector Machine (SVM) has revolutionized various domains and achieved remarkable successes. This progress relies on sophisticated algorithms and, even more, on large training samples. However, massive data collection introduces security concerns. To facilitate the secure and efficient integration of data for building an accurate SVM classifier, we present a non-interactive protocol for privacy-preserving SVM, named NPSVMT. Specifically, we define a new well-separated structure for computing gradients that decouples user data from model parameters, allowing data providers to outsource the collaborative learning task to the cloud. As a result, NPSVMT is capable of removing multiple rounds of communication and eliminating the straggler effect (waiting for the slowest participant), thereby going beyond schemes developed with interactive methods, e.g., federated learning. To further decrease data traffic, we introduce a highly efficient coding method to compress and parse training data. In addition, unlike outsourced schemes based on homomorphic encryption or secret sharing, NPSVMT exploits functional encryption to maintain data confidentiality, achieving dropout-tolerant secure aggregation. Our implementations verify that NPSVMT is faster by orders of magnitude than existing privacy-preserving SVM schemes on benchmark datasets.
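One way to read the "well-separated" idea, shown here on a linear model with squared loss as an illustrative stand-in (not the paper's exact SVM formulation or its encryption layer): the gradient factors into data-only statistics that providers can compute once, while iteration on the parameters never touches raw samples.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # provider-held samples (never shared)
y = rng.normal(size=50)
w = rng.normal(size=3)         # model parameters held by the server

# Data-only statistics, computable once on the provider side (in the actual
# protocol these would be protected cryptographically before hand-over):
A = X.T @ X
b = X.T @ y

# The squared-loss gradient factors as A @ w - b: data and parameters are
# "well separated", so iterating on w requires no access to raw samples.
grad_decoupled = A @ w - b
grad_direct = X.T @ (X @ w - y)
assert np.allclose(grad_decoupled, grad_direct)
```

The same separation is what lets the server run many gradient iterations against fixed aggregated statistics instead of exchanging messages with providers every round.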
Citations: 0
FedDA: Resource-adaptive federated learning with dual-alignment aggregation optimization for heterogeneous edge devices
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-15 | DOI: 10.1016/j.future.2024.107551
Federated learning (FL) is an emerging distributed learning paradigm that allows multiple clients to collaborate on training a global model without sharing their local data. However, in practical heterogeneous edge device scenarios, FL faces the challenges of system heterogeneity and data heterogeneity, which lead to unfair participation and degraded global model performance. In this paper, we introduce FedDA, a resource-adaptive FL framework that adapts to each client's computing resources by assigning heterogeneous models of different sizes. To improve the performance of heterogeneous model aggregation and adapt to non-independent and identically distributed (non-i.i.d.) data, we propose a dual-alignment aggregation optimization method comprising parameter feature space alignment and output space alignment. Specifically, FedDA exploits the permutation symmetry of weight space to permute model parameter positions via an adaptive layer-wise matching method, thereby aligning models with significant deviations in parameter feature space. FedDA mitigates the imbalance in parameter quantity between smaller and larger models through parameter expansion. Additionally, FedDA maps client labels into a uniform embedding space through output space alignment, thus reducing model parameter deviations due to non-i.i.d. data without additional client-side computing overhead. We evaluate the performance of FedDA on benchmark datasets, including FashionMNIST, CIFAR10, CIFAR100 and AGNews. Experimental results demonstrate that FedDA achieves up to 8.71% improvement in model accuracy compared to baseline methods, highlighting its effectiveness in addressing the challenges of heterogeneity.
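The permutation-symmetry point can be sketched with a greedy neuron-permutation alignment on a single weight matrix (a simplified stand-in for FedDA's adaptive layer-wise matching; the actual method also handles parameter expansion and output-space alignment):

```python
import numpy as np

def align_layer(reference, client):
    """Greedily permute client neurons (rows) to best match the reference
    before averaging -- a simplified stand-in for layer-wise matching."""
    unused = set(range(reference.shape[0]))
    perm = []
    for i in range(reference.shape[0]):
        # pick the closest still-unmatched client neuron for reference row i
        j = min(unused, key=lambda j: np.linalg.norm(reference[i] - client[j]))
        perm.append(j)
        unused.remove(j)
    return client[perm]

# Two clients hold functionally identical layers up to a neuron permutation:
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
shuffled = W[[2, 0, 3, 1]]
assert np.allclose(align_layer(W, shuffled), W)  # permutation recovered
```

Averaging `W` with `shuffled` directly would blend unrelated neurons; aligning first makes the average meaningful, which is the motivation for matching before aggregation.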
Citations: 0
FeL-MAR: Federated learning based multi resident activity recognition in IoT enabled smart homes
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-11 | DOI: 10.1016/j.future.2024.107552
This study proposes a Federated Learning (FL) based approach for recognizing multi-resident activities in smart homes, utilizing a diverse array of data collected from Internet of Things (IoT) sensors. The FL model is pivotal in ensuring the utmost privacy of user data, fostering decentralized learning environments, and allowing individual residents to retain control over their sensitive information. The main objective of this paper is to accurately recognize and interpret individual activities while allowing residents to maintain sovereignty over their confidential information. This helps to provide services that enrich assisted-living experiences within smart homes. The proposed system is designed to learn adaptively from multi-resident behaviors to predict and respond intelligently to residents' needs and preferences, promoting a harmonious and sustainable living environment while maintaining privacy, confidentiality and control over the data collected from sensors. The proposed FeL-MAR model demonstrates superior performance in activity recognition within multi-resident smart homes, outperforming other models in accuracy and precision while maintaining user privacy. This suggests that the effective use of FL and IoT sensors marks a significant advancement in smart home technologies, enhancing both efficiency and user experience without compromising data security.
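The FL core of such a system, stripped of the activity-recognition model, is federated averaging: clients take gradient steps on private data and the server averages only the weights. A minimal sketch on a linear-regression stand-in (the actual FeL-MAR model and sensor features are not described at this level of detail):

```python
import numpy as np

def local_step(w, X, y, lr=0.2):
    """One local gradient step on a client's private data (never shared)."""
    return w - lr * X.T @ (X @ w - y) / len(y)

def fed_avg(w, clients, rounds=100):
    """Server loop: broadcast w, collect locally updated weights, average."""
    for _ in range(rounds):
        w = np.mean([local_step(w, X, y) for X, y in clients], axis=0)
    return w

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(30, 2))     # each client's private features
    clients.append((X, X @ true_w))  # and private labels

w = fed_avg(np.zeros(2), clients)
assert np.allclose(w, true_w, atol=1e-2)  # fitted without pooling any data
```

Only the two-element weight vector crosses the network each round; the raw `(X, y)` pairs, standing in for per-resident sensor data, stay on their clients.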
Citations: 0
LGTDA: Bandwidth exhaustion attack on Ethereum via dust transactions
IF 6.2 | CAS Region 2 (Computer Science) | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2024-10-10 | DOI: 10.1016/j.future.2024.107549
Dust attacks typically involve sending a large number of low-value transactions to numerous addresses, aiming to facilitate transaction tracking and undermine privacy while simultaneously disrupting the market and increasing transaction delays. These transactions not only impact the network but also incur significant costs. This paper introduces a low-cost attack method called LGTDA, which achieves network partitioning through dust attacks. The method hinders block synchronization by consuming node bandwidth, leading to denial of service (DoS) for nodes and eventually causing large-scale network partitioning. In LGTDA, the attacker needs no real control over the nodes in the network, nor a minimum number of peer connections to them; the attack can even be initiated simply by invoking RPC services to send transactions. While ensuring the validity of the attack transactions, the LGTDA attack sends a large volume of low-value, high-frequency dust transactions to the network, relying on nodes for global broadcasting. This sustained attack can significantly impede the growth of block heights among nodes, resulting in network partitioning. We discuss the implications of the LGTDA attack, including its destructive capability, low cost, and ease of execution. Additionally, we analyze the limitations of this attack. Compared to grid lighting attacks, the LGTDA attack has a broader impact range and is not limited by the positional relationship with the victim node. Through experimental validation in a controlled environment, we confirm the effectiveness of this attack.
{"title":"LGTDA: Bandwidth exhaustion attack on Ethereum via dust transactions","authors":"","doi":"10.1016/j.future.2024.107549","DOIUrl":"10.1016/j.future.2024.107549","url":null,"abstract":"<div><div>Dust attacks typically involve sending a large number of low-value transactions to numerous addresses, aiming to facilitate transaction tracking and undermine privacy, while simultaneously disrupting the market and increasing transaction delays. These transactions not only impact the network but also incur significant costs. This paper introduces a low-cost attack method called LGTDA, which achieves network partitioning through dust attacks. This method hinders block synchronization by consuming node bandwidth, leading to denial of service (DoS) for nodes and eventually causing large-scale network partitioning. In LGTDA, the attacker does not need to have real control over the nodes in the network, nor is there a requirement for the number of peer connections to the nodes; the attack can even be initiated by simply invoking RPC services to send transactions. Under the condition of ensuring the validity of the attack transactions, the LGTDA attack sends a large volume of low-value, high-frequency dust transactions to the network, relying on nodes for global broadcasting. This sustained attack can significantly impede the growth of block heights among nodes, resulting in network partitioning. We discuss the implications of the LGTDA attack, including its destructive capability, low cost, and ease of execution. Additionally, we analyze the limitations of this attack. Compared to grid lighting attacks, the LGTDA attack has a broader impact range and is not limited by the positional relationship with the victim node. 
Through experimental validation in a controlled environment, we confirm the effectiveness of this attack.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":null,"pages":null},"PeriodicalIF":6.2,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
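The bandwidth-exhaustion mechanism lends itself to a back-of-the-envelope estimate: every dust transaction the attacker submits is rebroadcast by each relaying node to all of its peers. The sketch below is illustrative only; the transaction size, submission rate, and peer fan-out are assumed figures, not numbers from the paper.

```python
# Rough per-node egress consumed by relaying a dust-transaction flood.
# All three constants are illustrative assumptions, not paper figures.

TX_SIZE_BYTES = 110        # assumed size of a minimal value-transfer tx
TX_RATE_PER_SEC = 500      # assumed rate at which the attacker submits txs
PEER_FANOUT = 25           # assumed number of peers each node forwards to

def broadcast_bandwidth_mbps(tx_size, tx_rate, fanout):
    """Egress a relaying node spends on the flood, in megabits per second."""
    return tx_size * 8 * tx_rate * fanout / 1e6

bw = broadcast_bandwidth_mbps(TX_SIZE_BYTES, TX_RATE_PER_SEC, PEER_FANOUT)
print(f"{bw:.1f} Mbit/s")  # 110 * 8 * 500 * 25 / 1e6 = 11.0 Mbit/s
```

Even with these modest assumptions, sustained relaying consumes a double-digit share of a typical home node's uplink, which is the effect the attack exploits.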
Citations: 0
Correlating node centrality metrics with node resilience in self-healing systems with limited neighbourhood information
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-10-10 DOI: 10.1016/j.future.2024.107553
Resilient systems must self-heal their components and connections to maintain their topology and function when failures occur. This ability is essential to many networked and distributed systems, e.g., virtualisation platforms, cloud services, microservice architectures and decentralised algorithms. This paper builds upon a self-healing approach in which failed nodes are recreated and reconnected automatically based on topology information maintained within each node’s neighbourhood. The paper proposes two novel contributions. First, it offers a generic method for establishing the minimum size of the network neighbourhood that each node must know in order to recover the system’s component interconnection topology under a given probability of node failure. This improves on the previous proposal by reducing resource consumption, as only local information is communicated and stored. Second, it adopts analysis techniques from complex networks theory to correlate a node’s recovery probability with its closeness centrality within the self-healing system. This makes it possible to strengthen a system’s resilience by analysing its topological characteristics and rewiring weakly-connected nodes. These contributions are supported by extensive simulation experiments on different systems with various topological characteristics. The results confirm that nodes which propagate their topology information to more neighbours are more likely to be recovered, while requiring more resources.
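Closeness centrality, the metric the paper correlates with recovery probability, can be computed directly from an adjacency structure. The sketch below is a minimal stdlib-only implementation (BFS over an undirected graph, assumed connected), not the paper's code; the hub-and-spoke graph `g` is an invented example of spotting weakly-connected rewiring candidates.

```python
from collections import deque

def closeness_centrality(adj):
    """Closeness centrality c(v) = (n - 1) / sum_u d(v, u) for an
    undirected graph given as {node: set(neighbours)} (assumed connected)."""
    n = len(adj)
    scores = {}
    for src in adj:
        dist = {src: 0}                 # BFS shortest-path distances
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        scores[src] = (n - 1) / total if total else 0.0
    return scores

# Invented hub-and-spoke example: the hub 'a' is the best-connected node;
# the spokes are the fragile, weakly-connected candidates for rewiring.
g = {'a': {'b', 'c', 'd'}, 'b': {'a'}, 'c': {'a'}, 'd': {'a'}}
scores = closeness_centrality(g)
weakest = min(scores, key=scores.get)   # a spoke: candidate for rewiring
```

Ranking nodes by this score is exactly the kind of analysis the paper uses to pick which weakly-connected nodes to rewire.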
The proposed contributions can help practitioners to: identify the most fragile nodes in their distributed systems; consider corrective measures by increasing each node’s connectivity; and establish a suitable compromise between system resilience and costs.
Citations: 0
Feature Bagging with Nested Rotations (FBNR) for anomaly detection in multivariate time series
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-10-09 DOI: 10.1016/j.future.2024.107545
Detecting anomalies in multivariate time series poses a significant challenge across various domains. The infrequent occurrence of anomalies in real-world data, as well as the lack of a large number of annotated samples, makes this a complex task for classification algorithms. Deep neural network approaches based on Long Short-Term Memory (LSTM) networks, Autoencoders, and Variational Autoencoders (VAEs), among others, prove effective at handling imbalanced data. However, the same does not hold when such algorithms are applied to multivariate time series, as their performance degrades significantly. Our main hypothesis is that this degradation is due to anomalies stemming from a small subset of the feature set. To mitigate these issues in the multivariate setting, we propose forming an ensemble of base models by combining different feature selection and transformation techniques. The proposed processing pipeline applies a Feature Bagging technique over multiple individual models, considering a separate feature subset for each specific model. These subsets are then partitioned and transformed using multiple nested rotations derived from Principal Component Analysis (PCA). This approach aims to identify anomalies that arise from only a small portion of the feature set, while also introducing diversity by transforming the subspaces. Each model provides an anomaly score; these scores are then aggregated via an unsupervised decision fusion model. A semi-supervised fusion model was also explored, in which a Logistic Regressor was applied to the individual model outputs. The proposed methodology is evaluated on the Skoltech Anomaly Benchmark (SKAB), containing multivariate time series related to water flow in a closed circuit, as well as on the Server Machine Dataset (SMD), collected from a large Internet company. The experimental results reveal that the proposed ensemble technique surpasses state-of-the-art algorithms. The unsupervised approach demonstrated a performance improvement of 2% on SKAB and 3% on SMD compared to the baseline models. In the semi-supervised approach, the proposed method achieved at least a 10% improvement in anomaly detection accuracy.
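To make the ensemble construction concrete, here is a minimal NumPy sketch of the idea: a feature subset per model, a PCA-derived rotation fitted on training data, distance-to-mean as each model's anomaly score, and unsupervised fusion by averaging. It is not the authors' implementation; in particular, it assigns feature subsets round-robin for reproducibility where the paper samples them, and it applies a single rotation per model rather than nested ones over partitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_rotation(X):
    """Rotation matrix whose columns are the principal axes of X (via SVD)."""
    _, _, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return vt.T

def base_model_scores(X_train, X_test, feats):
    """One ensemble member: fixed feature subset, PCA rotation fitted on the
    training data, anomaly score = distance to the rotated training mean."""
    mu = X_train[:, feats].mean(axis=0)
    R = pca_rotation(X_train[:, feats])
    z_train = (X_train[:, feats] - mu) @ R
    z_test = (X_test[:, feats] - mu) @ R
    return np.linalg.norm(z_test - z_train.mean(axis=0), axis=1)

def fbnr_scores(X_train, X_test, n_models=8, n_feats=3):
    """Round-robin feature subsets; unsupervised fusion = mean of scores."""
    d = X_train.shape[1]
    subsets = [[(i + j) % d for j in range(n_feats)] for i in range(n_models)]
    return np.mean(
        [base_model_scores(X_train, X_test, s) for s in subsets], axis=0
    )

# Toy data: normal points in 8 dimensions; the last test point is anomalous
# in a single feature only -- exactly the case feature bagging targets.
X_train = rng.normal(size=(200, 8))
X_test = np.vstack([rng.normal(size=(5, 8)), np.r_[np.zeros(7), 12.0]])
fused = fbnr_scores(X_train, X_test)
# The single-feature anomaly receives the highest fused score even though
# most base models never see the corrupted feature.
```

The round-robin subsets guarantee every feature is covered by at least one model, which is what lets an anomaly confined to one feature surface in the fused score.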
Citations: 0
Enhancing interconnection network topology for chiplet-based systems: An automated design framework
IF 6.2 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2024-10-05 DOI: 10.1016/j.future.2024.107547
Chiplet-based systems integrate discrete chips on an interposer and use an interconnection network to enable communication between the different components. The topology of this network poses a significant challenge to overall performance, as it can greatly affect both latency and throughput. However, the design of interconnection network topologies is not currently automated: existing designs rely heavily on expert knowledge and fail to deliver optimal performance. To this end, we propose an automated design framework for chiplet interconnection network topology, called CINT-AD. To implement CINT-AD, we first investigate topology-related properties from the perspective of design constraints and structural symmetry. Then, using these properties, we develop an automated framework to generate the topology for interposer interconnections between different chiplets. A deadlock-free routing scheme is proposed for the topologies generated by CINT-AD to fully utilize the resources of the interconnection network. Experimental results show that CINT-AD achieves lower average latency and higher throughput compared to existing state-of-the-art topologies. Furthermore, power and area analysis shows that the overhead of CINT-AD is negligible.
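The deadlock-freedom claim can be illustrated with the classic channel-dependency-graph (CDG) test of Dally and Seitz: a routing function is deadlock-free if its CDG is acyclic. The sketch below is a generic check on an invented 2×2 mesh, not CINT-AD's actual scheme; dimension-ordered XY routes yield an acyclic CDG, while mixing in YX routes closes a dependency cycle.

```python
def channel_dependency_graph(routes):
    """Build the CDG: channels are directed node pairs, and channel c1
    depends on c2 when some route traverses c2 immediately after c1."""
    deps = {}
    for route in routes:
        chans = list(zip(route, route[1:]))
        for c1, c2 in zip(chans, chans[1:]):
            deps.setdefault(c1, set()).add(c2)
            deps.setdefault(c2, set())
    return deps

def is_acyclic(deps):
    """Iterative DFS cycle check with white/grey/black colouring."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {c: WHITE for c in deps}
    for start in deps:
        if colour[start] != WHITE:
            continue
        colour[start] = GREY
        stack = [(start, iter(deps[start]))]
        while stack:
            node, neighbours = stack[-1]
            for nxt in neighbours:
                if colour[nxt] == GREY:
                    return False            # back edge => dependency cycle
                if colour[nxt] == WHITE:
                    colour[nxt] = GREY
                    stack.append((nxt, iter(deps[nxt])))
                    break
            else:
                colour[node] = BLACK
                stack.pop()
    return True

# 2x2 mesh, nodes row-major (0 1 / 2 3). Dimension-ordered XY routes:
xy_routes = [(0, 1, 3), (1, 0, 2), (3, 2, 0), (2, 3, 1)]
# Mixing in YX routes closes a cycle of channel dependencies:
yx_mixed_routes = xy_routes + [(1, 3, 2), (2, 0, 1)]
```

Running the check confirms the textbook result: `xy_routes` alone gives an acyclic CDG, while `yx_mixed_routes` does not, so any routing scheme (CINT-AD's included) must restrict turns or add virtual channels to stay deadlock-free.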
Citations: 0