Analysing Attacks on Blockchain Systems in a Layer-based Approach
Joydip Das, Syed Ashraf Al Tasin, Md. Forhad Rabbi, Md Sadek Ferdous
arxiv-2409.10109, 2024-09-16, https://doi.org/arxiv-2409.10109

Blockchain is a growing class of decentralized systems built for transparency and immutability. Nevertheless, several major attacks on blockchain-based systems have undermined trust in them. This article presents a comprehensive study of 23 attacks on blockchain systems and categorizes them using a layer-based approach, providing an in-depth analysis of the feasibility and motivation of each attack. In addition, a framework is proposed that enables a systematic analysis of the impact and interconnection of these attacks, thereby providing a means of identifying potential attack vectors and designing appropriate countermeasures to strengthen any blockchain system.
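The abstract does not enumerate the layers it uses; as a rough illustration of what a layer-based categorization enables, the following minimal Python sketch maps attacks to layers and queries attacks that share a layer. The layer names and the attack-to-layer assignments are assumptions made for illustration, not the paper's taxonomy.

```python
# Minimal sketch of a layer-based attack taxonomy for blockchain systems.
# The layer names and the attack-to-layer mapping below are illustrative
# assumptions, not the categorization from the paper itself.

LAYER_TAXONOMY = {
    "network":     ["eclipse attack", "DDoS", "BGP hijacking"],
    "consensus":   ["51% attack", "selfish mining", "nothing-at-stake"],
    "contract":    ["reentrancy", "integer overflow"],
    "application": ["phishing", "key theft"],
}

def layer_of(attack: str) -> str | None:
    """Return the layer an attack is categorized under, if any."""
    for layer, attacks in LAYER_TAXONOMY.items():
        if attack in attacks:
            return layer
    return None

def co_located(attack: str) -> list[str]:
    """Attacks sharing a layer with `attack` -- candidates for
    interconnected attack vectors in a layer-based analysis."""
    layer = layer_of(attack)
    if layer is None:
        return []
    return [a for a in LAYER_TAXONOMY[layer] if a != attack]

print(layer_of("selfish mining"))  # consensus
print(co_located("reentrancy"))    # ['integer overflow']
```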
Pennsieve - A Collaborative Platform for Translational Neuroscience and Beyond
Zack Goldblum (University of Pennsylvania), Zhongchuan Xu (University of Pennsylvania), Haoer Shi (University of Pennsylvania), Patryk Orzechowski (University of Pennsylvania; AGH University of Krakow), Jamaal Spence (University of Pennsylvania), Kathryn A Davis (University of Pennsylvania), Brian Litt (University of Pennsylvania), Nishant Sinha (University of Pennsylvania), Joost Wagenaar (University of Pennsylvania)
arxiv-2409.10509, 2024-09-16, https://doi.org/arxiv-2409.10509

The exponential growth of neuroscientific data necessitates platforms that facilitate data management and multidisciplinary collaboration. In this paper, we introduce Pennsieve - an open-source, cloud-based scientific data management platform built to meet these needs. Pennsieve supports complex multimodal datasets and provides tools for data visualization and analyses. It takes a comprehensive approach to data integration, enabling researchers to define custom metadata schemas and utilize advanced tools to filter and query their data. Pennsieve's modular architecture allows external applications to extend its capabilities, and collaborative workspaces with peer-reviewed data publishing mechanisms promote high-quality datasets optimized for downstream analysis, both in the cloud and on-premises. Pennsieve forms the core for major neuroscience research programs including the NIH SPARC Initiative, NIH HEAL Initiative's PRECISION Human Pain Network, and NIH HEAL RE-JOIN Initiative. It serves more than 80 research groups worldwide, along with several large-scale, inter-institutional projects at clinical sites through the University of Pennsylvania. Underpinning the SPARC.Science, Epilepsy.Science, and Pennsieve Discover portals, Pennsieve stores over 125 TB of scientific data, with 35 TB of data publicly available across more than 350 high-impact datasets. It adheres to the findable, accessible, interoperable, and reusable (FAIR) principles of data sharing and is recognized as one of the NIH-approved Data Repositories. By facilitating scientific data management, discovery, and analysis, Pennsieve fosters a robust and collaborative research ecosystem for neuroscience and beyond.
Detection Made Easy: Potentials of Large Language Models for Solidity Vulnerabilities
Md Tauseef Alam, Raju Halder, Abyayananda Maiti
arxiv-2409.10574, 2024-09-15, https://doi.org/arxiv-2409.10574

The large-scale deployment of Solidity smart contracts on the Ethereum mainnet has increasingly attracted financially motivated attackers in recent years. Now-infamous attacks in Ethereum's history include the DAO attack in 2016 (50 million dollars lost), the Parity Wallet hack in 2017 (146 million dollars locked), Beautychain's BEC token exploit in 2018 (its 900-million-dollar market value fell to 0), and the NFT gaming blockchain breach in 2022 ($600 million in Ether stolen). This paper presents a comprehensive investigation of large language models (LLMs) and their capabilities in detecting OWASP Top Ten vulnerabilities in Solidity. We introduce a novel, class-balanced, structured, and labeled dataset named VulSmart, which we use to benchmark and compare the performance of open-source LLMs such as CodeLlama, Llama2, CodeT5 and Falcon alongside closed-source models like GPT-3.5 Turbo and GPT-4o Mini. Our proposed SmartVD framework is rigorously tested against these models through extensive automated and manual evaluations, utilizing BLEU and ROUGE metrics to assess the effectiveness of vulnerability detection in smart contracts. We also explore three distinct prompting strategies (zero-shot, few-shot, and chain-of-thought) to evaluate the multi-class classification and generative capabilities of the SmartVD framework. Our findings reveal that SmartVD outperforms its open-source counterparts and even exceeds the performance of the closed-source base models GPT-3.5 Turbo and GPT-4o Mini. After fine-tuning, the closed-source models GPT-3.5 Turbo and GPT-4o Mini achieved remarkable performance: 99% accuracy in detecting vulnerabilities, 94% in identifying their types, and 98% in determining severity. Notably, SmartVD performs best with the chain-of-thought prompting technique, whereas the fine-tuned closed-source models excel with the zero-shot prompting approach.
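The three prompting strategies compared in the paper can be illustrated with a small prompt-builder sketch. The template wording, the few-shot example, and the label format below are assumptions; the paper's actual prompts and the SmartVD pipeline are not reproduced here.

```python
# Sketch of the three prompting styles compared in the paper, applied to
# Solidity vulnerability detection. The template wording is an assumption;
# the paper's actual prompts and label set may differ.

FEW_SHOT_EXAMPLE = (
    "Contract:\n"
    "function withdraw() public {\n"
    "    msg.sender.call{value: balances[msg.sender]}(\"\");\n"
    "    balances[msg.sender] = 0;\n"
    "}\n"
    "Answer: vulnerable (reentrancy, high severity)\n"
)

def build_prompt(source: str, strategy: str) -> str:
    task = ("Classify the following Solidity contract as vulnerable or safe. "
            "If vulnerable, name the vulnerability type and severity.\n")
    if strategy == "zero-shot":
        return task + source
    if strategy == "few-shot":
        return task + FEW_SHOT_EXAMPLE + source
    if strategy == "chain-of-thought":
        return (task + source +
                "\nReason step by step about external calls, state updates, "
                "and arithmetic before giving the final answer.")
    raise ValueError(f"unknown strategy: {strategy}")

contract = "function transfer(address to, uint256 amt) public { ... }"
print(build_prompt(contract, "chain-of-thought"))
```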
Exploring Utility in a Real-World Warehouse Optimization Problem: Formulation Based on Quantum Annealers and Preliminary Results
Eneko Osaba, Esther Villar-Rodriguez, Antón Asla
arxiv-2409.09706, 2024-09-15, https://doi.org/arxiv-2409.09706

In the current NISQ era, one of the major challenges faced by researchers and practitioners lies in figuring out how to combine quantum and classical computing in the most efficient and innovative way. In this paper, we present a mechanism, coined Quantum Initialization for Warehouse Optimization Problem, that resorts to D-Wave's Quantum Annealer. The module has been specifically designed to be embedded into already existing classical software dedicated to the optimization of a real-world industrial problem. We preliminarily tested the implemented mechanism through a two-phase experiment against the classical version of the software.
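The abstract does not give the problem formulation; as a hedged sketch of how a warehouse assignment task might be cast as a QUBO for D-Wave's stack using the real dimod library, consider the toy model below. The items, slots, travel costs, and penalty weight are invented for illustration, and slot-capacity constraints are omitted for brevity.

```python
# Toy QUBO sketch of a warehouse slot-assignment problem on D-Wave's stack.
# The formulation (items, slots, travel costs, penalty weight) is an
# illustrative assumption, not the problem encoding used in the paper.
import itertools
import dimod

items = ["A", "B"]
slots = [0, 1]
travel_cost = {("A", 0): 1.0, ("A", 1): 3.0, ("B", 0): 2.0, ("B", 1): 1.0}
P = 10.0  # penalty weight enforcing one slot per item (assumed value)

bqm = dimod.BinaryQuadraticModel("BINARY")
for it in items:
    # Objective: travel cost of placing item `it` in slot s.
    for s in slots:
        bqm.add_linear((it, s), travel_cost[(it, s)])
    # One-slot-per-item constraint: P * (sum_s x_s - 1)^2 expanded into
    # linear (-P per variable) and quadratic (+2P per pair) penalties.
    for s in slots:
        bqm.add_linear((it, s), -P)
    for s, t in itertools.combinations(slots, 2):
        bqm.add_quadratic((it, s), (it, t), 2 * P)

# ExactSolver enumerates all states; on hardware one would use
# dwave.system's DWaveSampler/EmbeddingComposite instead.
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # item A in slot 0, item B in slot 1
```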
High Definition Map Mapping and Update: A General Overview and Future Directions
Benny Wijaya, Kun Jiang, Mengmeng Yang, Tuopu Wen, Yunlong Wang, Xuewei Tang, Zheng Fu, Taohua Zhou, Diange Yang
arxiv-2409.09726, 2024-09-15, https://doi.org/arxiv-2409.09726

Along with the rapid growth of autonomous vehicles (AVs), ever more is demanded of environment perception technology. Among other roles, HD mapping has become one of the most prominent in helping the vehicle carry out essential tasks such as localization and path planning. While increasing research effort has been directed toward HD map development, a comprehensive overview of the overall HD map mapping and update framework is still lacking. This article introduces the development and current state of the algorithms involved in creating and maintaining HD maps. As part of this study, the primary data preprocessing approach (turning raw data into information ready to feed mapping and update pipelines), semantic segmentation, and localization are also briefly reviewed. Moreover, map taxonomy, ontology, and quality assessment are extensively discussed, the general representation method of map data is presented, and mapping algorithms ranging from SLAM to transformer-based learning approaches are discussed. The development of HD map update algorithms, from change detection to update methods, is also presented. Finally, the authors discuss possible future developments and the remaining challenges in HD map mapping and update technology. This paper simultaneously serves as a position paper and a tutorial for those new to the HD map mapping and update domains.
Quantum data encoding as a distinct abstraction layer in the design of quantum circuits
Gabriele Agliardi, Enrico Prati
arxiv-2409.09339, 2024-09-14, https://doi.org/arxiv-2409.09339

Complex quantum circuits are built from combinations of quantum subroutines. The computation is possible as long as the quantum data encoding is consistent throughout the circuit. Despite its fundamental importance, the formalization of quantum data encoding has so far never been addressed systematically. We formalize the concept of quantum data encoding, namely the format providing a representation of a data set through a quantum state, as an abstract layer distinct from the associated data-loading circuit. We survey existing encoding methods and their respective strategies for classical-to-quantum exact and approximate data loading, for the quantum-to-classical extraction of information from states, and for quantum-to-quantum encoding conversion. Next, we show how major quantum algorithms find a natural interpretation in terms of data loading. For instance, the Quantum Fourier Transform is described as a quantum encoding converter, while Quantum Amplitude Estimation is described as an extraction routine. The new conceptual framework is exemplified by its application to quantum-based Monte Carlo simulations, showcasing the power of the proposed formalism for describing complex quantum circuits. Indeed, the approach clarifies the structure of complex quantum circuits and enables their efficient design.
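As a concrete instance of the encoding-versus-loading separation the paper formalizes, the sketch below expresses amplitude encoding purely as a data-format transformation in NumPy, with no loading circuit attached. Padding to a power of two and the probability-only read-out are standard properties of this encoding in general, not details taken from the paper.

```python
# Sketch of amplitude encoding as an abstract data-encoding layer:
# a length-2^n data vector is represented by the amplitudes of an
# n-qubit state, independently of the circuit that loads it.
import numpy as np

def amplitude_encode(data: np.ndarray) -> np.ndarray:
    """Map a real data vector to a valid quantum statevector.
    Pads to the next power of two and L2-normalizes."""
    dim = 1 << int(np.ceil(np.log2(len(data))))
    state = np.zeros(dim)
    state[: len(data)] = data
    norm = np.linalg.norm(state)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return state / norm

def measurement_probabilities(state: np.ndarray) -> np.ndarray:
    """Quantum-to-classical extraction: measurement probabilities,
    i.e. the information actually recoverable by sampling."""
    return np.abs(state) ** 2

x = np.array([3.0, 1.0, 2.0])            # 3 values -> padded to 4 (2 qubits)
psi = amplitude_encode(x)
print(psi)                                # amplitudes, unit norm
print(measurement_probabilities(psi))     # only |amplitude|^2 is observable
```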
ProcessTBench: An LLM Plan Generation Dataset for Process Mining
Andrei Cosmin Redis, Mohammadreza Fani Sani, Bahram Zarrin, Andrea Burattin
arxiv-2409.09191, 2024-09-13, https://doi.org/arxiv-2409.09191

Large Language Models (LLMs) have shown significant promise in plan generation. Yet existing datasets often lack the complexity needed for advanced tool-use scenarios, such as handling paraphrased query statements, supporting multiple languages, and managing actions that can be done in parallel. These scenarios are crucial for evaluating the evolving capabilities of LLMs in real-world applications. Moreover, current datasets do not enable the study of LLMs from a process perspective, particularly in scenarios where it is crucial to understand the typical behaviors and challenges of executing the same process under different conditions or formulations. To address these gaps, we present the ProcessTBench dataset, an extension of the TaskBench dataset specifically designed to evaluate LLMs within a process mining framework.
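The abstract names the dimensions ProcessTBench adds (paraphrased queries, multiple languages, parallelizable actions) without showing its schema; the record below is a purely hypothetical illustration of those dimensions, not the dataset's actual format.

```python
# Hypothetical ProcessTBench-style record. Field names are assumptions
# made for illustration; they reflect the dimensions named in the
# abstract (paraphrases, multiple languages, parallelizable actions),
# not the dataset's actual schema.
record = {
    "query": "Book a flight to Oslo and reserve a hotel for the same dates",
    "paraphrases": [
        "Get me plane tickets to Oslo plus a hotel booking",
        "Reserva un vuelo a Oslo y un hotel para las mismas fechas",  # es
    ],
    "plan": [
        {"action": "search_flights", "depends_on": []},
        {"action": "search_hotels", "depends_on": []},  # parallel to flights
        {"action": "book_flight", "depends_on": ["search_flights"]},
        {"action": "book_hotel", "depends_on": ["search_hotels"]},
    ],
}

# Actions with no unmet dependencies can run in parallel -- the property
# a process-mining view of LLM plans would inspect.
done: set[str] = set()
ready = [a["action"] for a in record["plan"]
         if all(d in done for d in a["depends_on"])]
print(ready)  # ['search_flights', 'search_hotels']
```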
Delay Balancing with Clock-Follow-Data: Optimizing Area Delay Trade-offs for Robust Rapid Single Flux Quantum Circuits
Robert S. Aviles, Phalgun G K, Peter A. Beerel
arxiv-2409.04944, 2024-09-08, https://doi.org/arxiv-2409.04944

This paper proposes an algorithm for the synthesis of clock-follow-data designs that provides robustness against timing violations for RSFQ circuits while maintaining high performance and minimizing area costs. Since superconducting logic gates must be clocked, managing data flow is a challenging problem that often requires the insertion of many path-balancing D Flip-Flops (DFFs) to properly sequence data, leading to a substantial increase in area. To address this challenge, we present an algorithm that inserts DFFs into clock-follow-data RSFQ circuits, partially balancing the delays within the circuit to achieve a target throughput while minimizing area. Our algorithm can account for expected timing variations, and by adjusting the bias of the clock network and the clock frequency we can mitigate unexpected timing violations post-fabrication. Quantifying the benefits of our approach on a benchmark suite with nominal delays, our designs offer an average 1.48x improvement in area-delay product (ADP) over high-frequency full path balancing (FPB) designs and a 2.07x improvement in ADP over the robust circuits provided by state-of-the-art (SOTA) multi-phase clocking solutions.
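The full path balancing (FPB) baseline that the paper improves on can be sketched directly: in a clocked RSFQ netlist, every fanin edge of a gate is padded with DFFs until all of its inputs arrive in the same clock cycle. The netlist below is a made-up example; clock-follow-data designs relax exactly this per-edge padding.

```python
# Sketch of the full-path-balancing (FPB) baseline: compute each gate's
# logic level, then pad every edge so all fanins of a gate arrive in the
# same cycle. DFFs on edge (u, v) = level(v) - level(u) - 1.
from functools import lru_cache

# gate -> list of fanin gates ("pi" marks primary inputs at level 0)
netlist = {
    "and1": ["pi", "pi"],
    "xor1": ["pi", "and1"],
    "or1":  ["and1", "xor1"],
}

@lru_cache(maxsize=None)
def level(gate: str) -> int:
    if gate == "pi":
        return 0
    return 1 + max(level(f) for f in netlist[gate])

dff_total = 0
for gate, fanins in netlist.items():
    for f in fanins:
        # Inputs arriving from shallower logic need padding DFFs.
        dffs = level(gate) - level(f) - 1
        dff_total += dffs
        if dffs:
            print(f"{f} -> {gate}: insert {dffs} DFF(s)")
print("total path-balancing DFFs:", dff_total)
```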
3D System Design: A Case for Building Customized Modular Systems in 3D
Philip Emma, Eren Kurshan
arxiv-2409.09068, 2024-09-06, https://doi.org/arxiv-2409.09068

3D promises a new dimension in composing systems by aggregating chips. Literally. While the most common uses are still tightly connected with its early forms as a packaging technology, new application domains have been emerging. As the underlying technology continues to evolve, the unique leverages of 3D have become increasingly appealing to a larger range of applications: from embedded mobile applications to servers and memory systems. In this paper we focus on the system-level implications of 3D technology, trying to differentiate the unique advantages that it provides to different market segments and applications.
LLM-based event abstraction and integration for IoT-sourced logs
Mohsen Shirali, Mohammadreza Fani Sani, Zahra Ahmadi, Estefania Serral
arxiv-2409.03478, 2024-09-05, https://doi.org/arxiv-2409.03478

The continuous flow of data collected by Internet of Things (IoT) devices has revolutionised our ability to understand and interact with the world across various applications. However, this data must be prepared and transformed into event data before analysis can begin. In this paper, we shed light on the potential of leveraging Large Language Models (LLMs) for event abstraction and integration. Our approach aims to create event records from raw sensor readings and to merge the logs from multiple IoT sources into a single event log suitable for further process mining applications. We demonstrate the capabilities of LLMs in event abstraction through a case study of an IoT application in elderly care and longitudinal health monitoring. The results show an average accuracy of 90% in detecting high-level activities. These results highlight LLMs' promising potential in addressing event abstraction and integration challenges, effectively bridging the existing gap.
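A minimal sketch of the event-abstraction step described here, assuming a windowed prompt-and-label loop: raw readings go into a prompt, and the model's label becomes one row of an event log keyed by the XES-style column names used in process mining tools. The sensor data, prompt wording, and `call_llm` stub are invented; only the overall raw-readings-to-event-log shape follows the abstract.

```python
# Sketch of the event-abstraction step: raw IoT sensor readings are
# summarized into one high-level activity per window, then emitted as
# rows of a process-mining event log (case id, activity, timestamp).
# `call_llm` is a stub standing in for whatever model the pipeline uses;
# prompt wording and the label set are assumptions, not the paper's.
import json

readings = [
    {"ts": "2024-09-05T07:01:00", "sensor": "kitchen_motion", "value": 1},
    {"ts": "2024-09-05T07:02:10", "sensor": "kettle_power",   "value": 1},
    {"ts": "2024-09-05T07:04:30", "sensor": "fridge_door",    "value": 1},
]

def call_llm(prompt: str) -> str:
    # Stub: a real pipeline would send `prompt` to an LLM and parse the
    # JSON-encoded activity label it returns. Hard-coded to stay runnable.
    return json.dumps({"activity": "prepare breakfast"})

def abstract_window(case_id: str, window: list[dict]) -> dict:
    prompt = ("Given these sensor readings, name the single high-level "
              "activity of daily living they describe as JSON "
              '{"activity": ...}:\n' + json.dumps(window))
    label = json.loads(call_llm(prompt))["activity"]
    # One event per window: the activity starts at the first reading.
    return {"case:concept:name": case_id,
            "concept:name": label,
            "time:timestamp": window[0]["ts"]}

event_log = [abstract_window("resident-1/day-249", readings)]
print(event_log)
```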