
Latest Publications in Frontiers of Computer Science

CompactChain: an efficient stateless chain for UTXO-model blockchain
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2365-9
B. Swaroopa Reddy, T. Uday Kiran Reddy

In this work, we propose a stateless blockchain called CompactChain, which compacts the entire state of UTXO (Unspent Transaction Output)-based blockchain systems into two RSA accumulators. The first accumulator is the Transaction Output (TXO) commitment, which represents the TXO set. The second is the Spent Transaction Output (STXO) commitment, which represents the STXO set. We discuss three algorithms: (i) updating the TXO and STXO commitments by the miner, who also provides proofs of the correctness of the updated commitments; (ii) proving a transaction's validity by providing a membership witness in the TXO commitment and a non-membership witness against the STXO commitment for a coin being spent by a user; and (iii) updating the witness for a coin that is not yet spent. The experimental results evaluate the performance of CompactChain in terms of the time taken by a miner to update the commitments and the time taken by a validator to verify the commitments and validate transactions. We compare the performance of CompactChain with existing state-of-the-art work on stateless blockchains. CompactChain reduces commitment-update complexity and transaction witness size, which in turn reduces the mempool size and propagation latency without compromising system throughput (transactions per second, TPS).
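The membership-witness mechanics behind an RSA accumulator can be sketched in a few lines. This is a toy illustration only (tiny modulus, hand-picked primes; it is not CompactChain's construction, and real accumulators use a large RSA modulus and hash elements to primes); all names are hypothetical.

```python
# Toy RSA accumulator: acc = g^(x1*x2*...*xk) mod n, where elements are primes.
# A membership witness for x is the accumulator over all elements except x;
# it verifies because witness^x == acc (mod n).

def accumulate(g, n, elems):
    """Fold all elements into a single accumulator value."""
    acc = g
    for x in elems:
        acc = pow(acc, x, n)
    return acc

def membership_witness(g, n, elems, x):
    """Witness = accumulator of every element except x."""
    return accumulate(g, n, [e for e in elems if e != x])

def verify(witness, x, acc, n):
    return pow(witness, x, n) == acc

n = 3233            # toy modulus (61 * 53); far too small for real security
g = 2
coins = [3, 5, 7]   # elements must be (mapped to) primes
acc = accumulate(g, n, coins)
w = membership_witness(g, n, coins, 5)
assert verify(w, 5, acc, n)       # coin 5 is in the set
```

A validator holding only `acc` can check a coin with one modular exponentiation, which is what makes the chain state "stateless" for verifiers.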

Citations: 0
An anonymous authentication and secure data transmission scheme for the Internet of Things based on blockchain
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2595-x
Xingxing Chen, Qingfeng Cheng, Weidong Yang, Xiangyang Luo

With the widespread use of network infrastructures such as 5G and low-power wide-area networks, a large number of Internet of Things (IoT) device nodes are connected to the network, generating massive amounts of data. Achieving anonymous authentication of IoT nodes and secure data transmission is therefore a great challenge. At present, blockchain technology is widely used in authentication and data storage due to its decentralization and immutability. Recently, Fan et al. proposed a secure and efficient blockchain-based IoT authentication and data sharing scheme. We studied it as one of the state-of-the-art protocols and found that it considers neither resistance to ephemeral secret compromise attacks nor the anonymity of IoT nodes. To overcome these security flaws, this paper proposes an enhanced authentication and data transmission scheme, which is verified by formal security proofs and informal security analysis. Furthermore, Scyther is applied to prove the security of the proposed scheme. Moreover, the proposed scheme is demonstrated to achieve better performance in terms of communication and computational cost than other related schemes.
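The "ephemeral secret compromise" weakness the abstract mentions can be illustrated with a toy Diffie-Hellman-style key exchange (this is not Fan et al.'s scheme; the group parameters and key-derivation choices below are illustrative): if the session key depends only on ephemeral values, leaking them reveals the key, whereas also binding long-term secrets into the derivation blocks that attack.

```python
from hashlib import sha256

p, g = 2 ** 127 - 1, 3   # toy group (2^127 - 1 is prime); not secure parameters

def dh(pub, priv):
    """One modular-exponentiation Diffie-Hellman step."""
    return pow(pub, priv, p)

# Weak design: session key derived only from ephemerals a, b.
a, b = 2718, 3141                  # ephemeral secrets
weak_key = dh(pow(g, b, p), a)     # = g^(a*b) mod p
# An attacker who learns the ephemerals a, b recomputes the key outright:
assert dh(pow(g, b, p), a) == weak_key

# Stronger design: also mix in the parties' long-term secrets x, y,
# so leaked ephemerals alone no longer determine the session key.
x, y = 1234, 5678
strong_key = sha256(str((dh(pow(g, b, p), a),
                         dh(pow(g, y, p), x))).encode()).hexdigest()
```

Tools like Scyther automate exactly this kind of check, exploring whether an adversary holding selected secrets can still distinguish or compute the session key.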

Citations: 0
FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2283-x
Shiwei Lu, Ruihu Li, Wenbin Liu

Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy prevent privacy leakage from gradients, they impose a negative effect on communication overhead or model performance. Moreover, these schemes change the original distribution of the local gradient, which makes it difficult to defend against adversarial attack. In this paper, we propose a novel federated learning framework with model decomposition, aggregation and assembling (FedDAA), along with a training algorithm, to train the federated model: the local gradient is decomposed into multiple blocks and sent to different proxy servers to complete aggregation. To bring better privacy protection to FedDAA, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attack in FedDAA and design an algorithm to verify the correctness of aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed image and the original image to 0.014 while maintaining model convergence accuracy at 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attack are compatible with privacy protection in FedDAA, and the defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results brings negligible overhead to FedDAA.
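The decompose-and-aggregate idea can be sketched minimally: each client splits its gradient vector into blocks, each proxy averages only its own block across clients, and the averaged blocks are reassembled. This is an illustrative sketch with hypothetical shapes and names, not FedDAA's actual protocol (which adds privacy indicators and verification on top).

```python
# Split a flat gradient into num_proxies nearly equal contiguous blocks,
# so no single proxy ever sees a client's full gradient.

def split_blocks(grad, num_proxies):
    k, r = divmod(len(grad), num_proxies)
    blocks, i = [], 0
    for p in range(num_proxies):
        size = k + (1 if p < r else 0)
        blocks.append(grad[i:i + size])
        i += size
    return blocks

def aggregate(client_grads, num_proxies):
    # Each proxy p collects block p from every client...
    per_proxy = [[split_blocks(g, num_proxies)[p] for g in client_grads]
                 for p in range(num_proxies)]
    assembled = []
    for blocks in per_proxy:  # ...averages element-wise over clients...
        n = len(blocks)
        assembled.extend(sum(vals) / n for vals in zip(*blocks))
    return assembled          # ...and the server reassembles the blocks.

grads = [[1.0, 2.0, 3.0, 4.0], [3.0, 4.0, 5.0, 6.0]]
print(aggregate(grads, 2))  # [2.0, 3.0, 4.0, 5.0]
```

Because reassembly is a plain concatenation of averaged blocks, the result equals ordinary federated averaging, while each proxy observes only a fragment of any client's gradient.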

Citations: 0
EvolveKG: a general framework to learn evolving knowledge graphs
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-022-2467-9
Jiaqi Liu, Zhiwen Yu, Bin Guo, Cheng Deng, Luoyi Fu, Xinbing Wang, Chenghu Zhou

A great many practical applications have observed knowledge evolution, i.e., the continuous birth of new knowledge whose formation is influenced by the structure of historical knowledge. This observation gives rise to evolving knowledge graphs, whose structure grows over time. However, both the modal characterization and the algorithmic implementation of evolving knowledge graphs remain unexplored. To this end, we propose EvolveKG, a general framework that enables algorithms for static knowledge graphs to learn evolving ones. EvolveKG quantifies the influence of a historical fact on a current one, called the effectiveness of the fact, and makes knowledge predictions by leveraging all cross-time knowledge interactions. The novelty of EvolveKG lies in the Derivative Graph, a weighted snapshot of evolution at a certain time. In particular, each weight quantifies knowledge effectiveness through a temporally decaying function of consistency and attenuation, two proposed factors depicting whether or not the effectiveness of a fact fades away with time. Besides, considering both knowledge creation and loss, we obtain higher prediction accuracy when the effectiveness of all facts increases with time or remains unchanged. On four real datasets, the superiority of EvolveKG in prediction accuracy is confirmed.
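One simple way to realize a "temporally decaying effectiveness" of a fact is shown below. The exponential form, the `consistent` flag, and the `attenuation` parameter are illustrative assumptions, not the paper's actual definition of the consistency and attenuation factors.

```python
import math

def effectiveness(t_fact, t_now, consistent, attenuation=0.5):
    """Hypothetical edge weight in a Derivative-Graph-like snapshot:
    a consistent fact keeps full effectiveness; otherwise the weight
    decays exponentially with the fact's age."""
    age = t_now - t_fact
    if consistent:
        return 1.0
    return math.exp(-attenuation * age)

# A consistent fact stays fully effective; an attenuating one fades,
# and an older attenuating fact weighs less than a recent one.
assert effectiveness(0, 10, consistent=True) == 1.0
assert effectiveness(8, 10, False) < effectiveness(9, 10, False)
```

A static-graph algorithm run on such a weighted snapshot then automatically down-weights stale facts, which is the intuition behind feeding cross-time interactions into a single graph.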

Citations: 0
Spreadsheet quality assurance: a literature review
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2384-6
Pak-Lok Poon, Man Fai Lau, Yuen Tak Yu, Sau-Fun Tang

Spreadsheets are widely used for information processing to support decision making by both professional developers and non-technical end users. Moreover, business intelligence and artificial intelligence are increasingly popular in industry nowadays, where spreadsheets have been used as, or integrated into, intelligent or expert systems in various application domains. However, it has been repeatedly reported that faults often exist in operational spreadsheets, which could severely compromise the quality of conclusions and decisions based on them. With a view to systematically examining this problem via a survey of existing work, we have conducted a comprehensive literature review on the quality issues and related techniques of spreadsheets over a 35.5-year period (January 1987 to June 2022) for target journals and a 10.5-year period (January 2012 to June 2022) for target conferences. Two major findings are: (a) spreadsheet quality is best addressed throughout the whole spreadsheet life cycle, rather than at just a few specific stages; and (b) relatively more studies focus on spreadsheet testing and debugging (related to fault detection and removal) than on spreadsheet specification, modeling, and design (related to development). As prevention is better than cure, more research should be performed on the early stages of the spreadsheet life cycle. Informed by our comprehensive review, we identify the major research gaps and highlight key research directions for future work in the area.

Citations: 0
Decoupled deep Hough voting for point cloud registration
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2471-8
Mingzhi Yuan, Kexue Fu, Zhihao Li, Manning Wang

Estimating a rigid transformation from noisy correspondences is critical to feature-based point cloud registration. Recently, a series of studies have attempted to combine traditional robust model fitting with deep learning. Among them, DHVR proposed a Hough voting-based method, achieving new state-of-the-art performance. However, we find that voting on rotation and translation simultaneously hinders performance. Therefore, we propose a new Hough voting-based method that decouples the rotation and translation spaces. Specifically, we first utilize Hough voting and a neural network to estimate the rotation. Then, given a good rotation initialization, we can easily obtain an accurate rigid transformation. Extensive experiments on the 3DMatch and 3DLoMatch datasets show that our method achieves performance comparable to state-of-the-art methods. We further demonstrate the generalization of our method through experiments on the KITTI dataset.
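The core of Hough voting over a decoupled rotation space can be shown with a 2D toy: each correspondence casts a vote for a rotation angle, and the most-voted bin wins even when some correspondences are outliers. This is a hand-rolled sketch (2D, no translation, hypothetical bin size), not the paper's learned 3D pipeline.

```python
import math
from collections import Counter

def vote_rotation(src, dst, bin_deg=1.0):
    """Each (src, dst) correspondence votes for the rotation angle (in
    degrees) that maps src onto dst about the origin; return the angle
    bin with the most votes."""
    votes = Counter()
    for (x1, y1), (x2, y2) in zip(src, dst):
        ang = math.degrees(math.atan2(y2, x2) - math.atan2(y1, x1)) % 360.0
        votes[round(ang / bin_deg) * bin_deg % 360.0] += 1
    return votes.most_common(1)[0][0]

src = [(1, 0), (0, 1), (2, 2), (5, 1)]
theta = math.radians(90)
dst = [(x * math.cos(theta) - y * math.sin(theta),
        x * math.sin(theta) + y * math.cos(theta)) for x, y in src]
dst[-1] = (9.0, 9.0)   # one outlier correspondence

print(vote_rotation(src, dst))  # 90.0
```

Because the outlier's vote lands in a different bin, the mode is unaffected; once rotation is fixed this way, translation reduces to a separate, much smaller estimation problem, which is the decoupling argument.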

Citations: 0
Group control for procedural rules: parameterized complexity and consecutive domains
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2700-1
Yongjie Yang, Dinko Dimitrov

We consider GROUP CONTROL BY ADDING INDIVIDUALS (GCAI) in the setting of group identification for two procedural rules: the consensus-start-respecting rule and the liberal-start-respecting rule. It is known that GCAI is NP-hard for both rules, but whether it is fixed-parameter tractable with respect to the number of distinguished individuals remained open. We resolve both open problems in the affirmative. In addition, we strengthen the NP-hardness of GCAI by showing that, with respect to the natural parameter of the number of added individuals, GCAI for both rules is W[2]-hard. Notably, the W[2]-hardness for the liberal-start-respecting rule holds even when restricted to a very special case where the qualifications of individuals satisfy the so-called consecutive ones property. For the consensus-start-respecting rule, however, the problem becomes polynomial-time solvable in this special case. We also study a dual restriction where the disqualifications of individuals fulfill the consecutive ones property, and show that under this restriction GCAI for both rules turns out to be polynomial-time solvable. Our reductions showing W[2]-hardness also imply several algorithmic lower bounds.
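The "consecutive ones property" the abstract relies on is easy to state concretely: in each qualification row, all the 1-entries form one contiguous run. A minimal check (with made-up example rows) looks like this:

```python
def has_consecutive_ones(row):
    """True iff all 1s in the row are contiguous (the consecutive
    ones property for this row)."""
    ones = [i for i, v in enumerate(row) if v == 1]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

# Hypothetical qualification profile: each row is one individual's
# qualifications over an ordered set of individuals.
profile = [
    [0, 1, 1, 1, 0],   # one contiguous block: satisfies the property
    [1, 1, 0, 0, 0],
]
assert all(has_consecutive_ones(r) for r in profile)
assert not has_consecutive_ones([1, 0, 1])   # gap: property violated
```

Interval structure like this is what typically turns hard control problems tractable, which matches the abstract's contrast between the W[2]-hard general case and the polynomial-time consecutive-domain cases.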

Citations: 0
A hybrid memory architecture supporting fine-grained data migration
IF 4.2, CAS Tier 3 (Computer Science), Q1 Mathematics. Pub Date: 2024-01-22. DOI: 10.1007/s11704-023-2675-y
Ye Chi, Jianhui Yue, Xiaofei Liao, Haikun Liu, Hai Jin

Hybrid memory systems composed of dynamic random access memory (DRAM) and non-volatile memory (NVM) often exploit page migration technologies to take full advantage of the different memory media. Most previous proposals migrate data at the granularity of 4 KB pages, and thus waste memory bandwidth and DRAM resources. In this paper, we propose Mocha, a non-hierarchical architecture that organizes DRAM and NVM in a flat physical address space but manages them as a cache/memory hierarchy. Since the commercial NVM device, Intel Optane DC Persistent Memory Module (DCPMM), actually accesses the physical media at a granularity of 256 bytes (an Optane block), we manage the DRAM cache at that 256-byte size to match this feature of Optane. This design not only enables fine-grained data migration and management for the DRAM cache, but also avoids write amplification for Intel Optane DCPMM. We also create an Indirect Address Cache (IAC) in the Hybrid Memory Controller (HMC) and propose a reverse address mapping table in the DRAM to speed up address translation and cache replacement. Moreover, we exploit a utility-based caching mechanism to filter cold blocks in the NVM, further improving the efficiency of the DRAM cache. We implement Mocha in an architectural simulator. Experimental results show that, compared with a typical hybrid memory architecture, HSCC, Mocha improves application performance by 8.2% on average (up to 24.6%), while reducing energy consumption by 6.9% and data migration traffic by 25.9% on average.
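Managing the DRAM cache at Optane's 256-byte granularity boils down to simple address arithmetic. The sketch below assumes a direct-mapped cache with a hypothetical set count; Mocha's actual cache organization and IAC lookup are more involved.

```python
BLOCK = 256        # Optane's internal access granularity, in bytes
NUM_SETS = 1024    # hypothetical number of DRAM-cache sets

def cache_slot(addr):
    """Map a physical byte address to its 256 B block, then split the
    block index into a (set index, tag) pair for a direct-mapped cache."""
    block = addr // BLOCK
    return block % NUM_SETS, block // NUM_SETS

# Consecutive 256 B blocks land in consecutive sets; addresses one
# "way" apart (NUM_SETS blocks) share a set but differ in tag.
assert cache_slot(0) == (0, 0)
assert cache_slot(256) == (1, 0)
assert cache_slot(256 * 1024) == (0, 1)
```

Working at 256 B rather than 4 KB means a migration moves one Optane block instead of sixteen, which is where the bandwidth and write-amplification savings come from.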

引用次数: 0
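The utility-based caching mechanism described above can be sketched as a simple admission filter: a 256-byte block is promoted into the DRAM cache only after it has been touched often enough to look hot, so cold NVM blocks never displace useful cached data. This is an illustrative toy, not Mocha's implementation; the class name, threshold, capacity, and LRU eviction policy are all our assumptions:

```python
# Illustrative sketch (not Mocha's actual code): a utility-style admission
# filter that only caches "hot" 256-byte NVM blocks in DRAM.
from collections import OrderedDict, defaultdict

BLOCK = 256  # bytes, matching Optane DCPMM's internal access granularity

class HotBlockCache:
    def __init__(self, capacity_blocks=4, admit_threshold=2):
        self.capacity = capacity_blocks
        self.threshold = admit_threshold
        self.cache = OrderedDict()           # block_id -> data, in LRU order
        self.access_count = defaultdict(int)

    def access(self, addr):
        """Return 'dram' on a cache hit, 'nvm' otherwise."""
        block_id = addr // BLOCK
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # refresh LRU position
            return 'dram'
        self.access_count[block_id] += 1
        # Admit the block only once it has proven itself hot, so that
        # cold blocks are filtered out and never pollute the DRAM cache.
        if self.access_count[block_id] >= self.threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the LRU block
            self.cache[block_id] = None
        return 'nvm'

cache = HotBlockCache()
hits = [cache.access(0x1000) for _ in range(3)]
# first two touches are served from NVM; the third hits the DRAM cache
```

Managing the cache at 256 bytes rather than 4 KB means each admission or eviction moves one Optane block instead of a whole page, which is the migration-traffic saving the abstract reports.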
Protein acetylation sites with complex-valued polynomial model
IF 4.2 · CAS Tier 3 (Computer Science) · Q1 Mathematics · Pub Date: 2024-01-22 · DOI: 10.1007/s11704-023-2640-9
Wenzheng Bao, Bin Yang

Protein acetylation refers to a process of adding acetyl groups (CH3CO-) to lysine residues on protein chains. As one of the most commonly used protein post-translational modifications, lysine acetylation plays an important role in different organisms. In our study, we developed a human-specific method that uses a cascade classifier of complex-valued polynomial models (CVPM), combined with sequence and structural feature descriptors, to solve the problem of imbalance between positive and negative samples. Complex-valued gene expression programming and differential evolution are utilized to search for the optimal CVPM model. We also made a systematic and comprehensive analysis of the acetylation data and the prediction results. Our proposed method achieves 79.15% in Sp, 78.17% in Sn, 78.66% in ACC, 78.76% in F1, and 0.5733 in MCC, performing better than other state-of-the-art methods.

Citations: 0
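The core idea of a complex-valued polynomial model can be illustrated with a toy scorer: features pass through a polynomial with complex coefficients, and the magnitude of the complex output drives the binary decision. Everything below (function names, the hand-fixed coefficients, the magnitude-threshold rule) is a hypothetical sketch; the paper searches the coefficients with complex-valued gene expression programming and differential evolution, which is not shown here:

```python
# Toy complex-valued polynomial scorer in the spirit of CVPM.
# Coefficients are fixed by hand for illustration only.
import numpy as np

def cvpm_score(x, coeffs):
    """Evaluate sum_k c_k * x**k with complex coefficients c_k."""
    powers = np.array([x**k for k in range(len(coeffs))])
    return np.dot(coeffs, powers)

def classify(x, coeffs, threshold=1.0):
    # Decide by the magnitude of the complex score (a hypothetical rule).
    return int(abs(cvpm_score(x, coeffs)) >= threshold)

coeffs = np.array([0.5 + 0.5j, 1.0 - 0.2j])  # c0 + c1*x
print(classify(2.0, coeffs))  # |0.5+0.5j + 2.0-0.4j| = |2.5+0.1j| ≈ 2.50 → 1
print(classify(0.0, coeffs))  # |0.5+0.5j| ≈ 0.71 → 0
```

In the paper's setting, x would be replaced by sequence and structural feature descriptors of a candidate lysine site, and several such scorers would be chained into the cascade classifier.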
Accelerating BERT inference with GPU-efficient exit prediction
IF 4.2 · CAS Tier 3 (Computer Science) · Q1 Mathematics · Pub Date: 2024-01-22 · DOI: 10.1007/s11704-022-2341-9

Abstract

BERT is a representative pre-trained language model that has drawn extensive attention for significant improvements in downstream Natural Language Processing (NLP) tasks. The complex architecture and massive parameters bring BERT competitive performance but also result in slow speed at model inference time. To speed up BERT inference, FastBERT realizes adaptive inference with an acceptable drop in accuracy, based on knowledge distillation and the early-exit technique. However, many factors may limit the performance of FastBERT, such as a teacher classifier that is not knowledgeable enough, batch-size shrinkage, and redundant computation of student classifiers. To overcome these limitations, we propose a new BERT inference method with GPU-Efficient Exit Prediction (GEEP). GEEP leverages the shared exit loss to simplify the training process of FastBERT from two steps into only one, and makes the teacher classifier more knowledgeable by feeding diverse Transformer outputs to it. In addition, the exit layer prediction technique is proposed, which utilizes a GPU hash table to handle the token-level exit-layer distribution and to sort test samples by predicted exit layers. In this way, GEEP avoids batch-size shrinkage and redundant computation of student classifiers. Experimental results on twelve public English and Chinese NLP datasets prove the effectiveness of the proposed approach. The source codes of GEEP will be released to the public upon paper acceptance.

Citations: 0
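The benefit of sorting test samples by predicted exit layer can be sketched as follows: once samples that share a predicted exit layer are batched together, the entire batch leaves the Transformer stack at the same depth, so no sample exits early and leaves a shrunken batch behind. The names and grouping policy below are our assumptions, not GEEP's actual code:

```python
# Illustrative sketch (hypothetical names, not GEEP's implementation):
# group test samples by their predicted exit layer so each batch exits
# the Transformer stack at the same depth, avoiding batch-size shrinkage.
from itertools import groupby

def batch_by_exit_layer(samples, predicted_layers, batch_size=2):
    """Yield (exit_layer, batch) pairs with a homogeneous exit depth."""
    # Sort sample indices by predicted exit layer (groupby needs sorted input).
    order = sorted(range(len(samples)), key=lambda i: predicted_layers[i])
    for layer, group in groupby(order, key=lambda i: predicted_layers[i]):
        idx = list(group)
        for start in range(0, len(idx), batch_size):
            yield layer, [samples[i] for i in idx[start:start + batch_size]]

samples = ["s0", "s1", "s2", "s3", "s4"]
layers = [3, 1, 3, 1, 2]  # exit layers from a hypothetical predictor
batches = list(batch_by_exit_layer(samples, layers))
# → [(1, ['s1', 's3']), (2, ['s4']), (3, ['s0', 's2'])]
```

In GEEP the per-sample exit layers come from the token-level exit-layer distribution kept in a GPU hash table; here they are simply given as a list for illustration.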