The increase in global trade, the impact of COVID-19, and the tightening of environmental and safety regulations have brought significant changes to the maritime transportation market. To address these challenges, the port logistics sector is rapidly adopting advanced technologies such as big data, the Internet of Things, and AI. Despite these efforts, however, solving several issues related to productivity, environment, and safety in port logistics requires collaboration among various stakeholders. In this study, we introduce an AI-based port logistics metaverse framework (PLMF) that facilitates communication, data sharing, and decision-making among diverse stakeholders in port logistics. The developed PLMF includes 11 AI-based metaverse content modules related to productivity, environment, and safety, enabling the monitoring, simulation, and decision-making of real port logistics processes. Examples of these modules include prediction of the expected time of arrival, dynamic port operation planning, monitoring and prediction of ship fuel consumption and port equipment emissions, and detection and monitoring of hazardous ship routes and accidents between workers and port equipment. We conducted a case study using historical data from Busan Port to analyze the effectiveness of the PLMF. By predicting the expected arrival times of ships within the PLMF and optimizing port operations accordingly, we observed that the framework could generate additional direct revenue of approximately 7.3 million dollars annually, along with a 79% improvement in ship punctuality and corresponding environmental benefits for the port. These findings indicate that the PLMF not only provides a platform for various stakeholders in port logistics to participate and collaborate but also significantly enhances the accuracy and sustainability of decision-making in port logistics through AI-based simulations.
{"title":"Artificial Intelligence-based Smart Port Logistics Metaverse for Enhancing Productivity, Environment, and Safety in Port Logistics: A Case Study of Busan Port","authors":"Sunghyun Sim, Dohee Kim, Kikun Park, Hyerim Bae","doi":"arxiv-2409.10519","DOIUrl":"https://doi.org/arxiv-2409.10519","url":null,"abstract":"The increase in global trade, the impact of COVID-19, and the tightening of\u0000environmental and safety regulations have brought significant changes to the\u0000maritime transportation market. To address these challenges, the port logistics\u0000sector is rapidly adopting advanced technologies such as big data, Internet of\u0000Things, and AI. However, despite these efforts, solving several issues related\u0000to productivity, environment, and safety in the port logistics sector requires\u0000collaboration among various stakeholders. In this study, we introduce an\u0000AI-based port logistics metaverse framework (PLMF) that facilitates\u0000communication, data sharing, and decision-making among diverse stakeholders in\u0000port logistics. The developed PLMF includes 11 AI-based metaverse content\u0000modules related to productivity, environment, and safety, enabling the\u0000monitoring, simulation, and decision making of real port logistics processes.\u0000Examples of these modules include the prediction of expected time of arrival,\u0000dynamic port operation planning, monitoring and prediction of ship fuel\u0000consumption and port equipment emissions, and detection and monitoring of\u0000hazardous ship routes and accidents between workers and port equipment. We\u0000conducted a case study using historical data from Busan Port to analyze the\u0000effectiveness of the PLMF. By predicting the expected arrival time of ships\u0000within the PLMF and optimizing port operations accordingly, we observed that\u0000the framework could generate additional direct revenue of approximately 7.3\u0000million dollars annually, along with a 79% improvement in ship punctuality,\u0000resulting in certain environmental benefits for the port. These findings\u0000indicate that PLMF not only provides a platform for various stakeholders in\u0000port logistics to participate and collaborate but also significantly enhances\u0000the accuracy and sustainability of decision-making in port logistics through\u0000AI-based simulations.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142262235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite being legally equivalent to handwritten signatures, Qualified Electronic Signatures (QES) have not yet achieved significant market success. QES offer substantial potential for reducing reliance on paper-based contracts, enabling secure digital applications, and standardizing public services. However, despite the extensive range of use cases, there is limited information on their usability. To address this gap, we systematize QES use cases and categorize the system designs implemented to support them, emphasizing the necessity of evaluating their respective strengths and weaknesses through usability studies. Additionally, we present findings from cognitive walkthroughs conducted on use cases across four different QES systems. We anticipate that this work will serve as a foundation for a significant expansion of research into the usability of Qualified Electronic Signatures.
{"title":"Evaluating the Usability of Qualified Electronic Signatures: Systematized Use Cases and Design Paradigms","authors":"Mustafa Cagal, Kemal Bicakci","doi":"arxiv-2408.14349","DOIUrl":"https://doi.org/arxiv-2408.14349","url":null,"abstract":"Despite being legally equivalent to handwritten signatures, Qualified\u0000Electronic Signatures (QES) have not yet achieved significant market success.\u0000QES offer substantial potential for reducing reliance on paper-based contracts,\u0000enabling secure digital applications, and standardizing public services.\u0000However, there is limited information on their usability despite the extensive\u0000range of use cases. To address this gap, we systematize QES use cases and\u0000categorize the system designs implemented to support these use cases,\u0000emphasizing the necessity to evaluate their respective strengths and weaknesses\u0000through usability studies. Additionally, we present findings from cognitive\u0000walkthroughs conducted on use cases across four different QES systems. We\u0000anticipate that this work will serve as a foundation for a significant\u0000expansion of research into the usability of Qualified Electronic Signatures.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142224935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data circulation is a complex scenario involving many participants with different types of requirements; it must not only comply with laws and regulations but also face multiple technical and business challenges. To address these issues systematically and comprehensively, a deep understanding of 'data circulation' is essential. Traditional analysis tends to proceed from the circulation model of commodities, that is, tangible objects; this model has defects and shortcomings, and it tends toward a formalized approach that faces numerous challenges in practice. This paper analyzes the circulation of data from a philosophical perspective, derives a new explication of data and of the executing entity, and provides new definitions of the concepts of data utilization and of key data stakeholders (objects). It also puts forward the idea of 'data alienation' and constructs a new interpretive framework for 'data circulation'. Based on this framework, it proposes that 'data alienation' is the core of 'data circulation', that benefit distribution is its driving force, and that legal compliance is its foundation, and it further discusses three modes of 'data circulation'. It points out that 'data circulation' differs from traditional 'commodity circulation' and that achieving it requires a comprehensive information infrastructure. From a theoretical point of view, this lays a solid foundation for the development of 'data circulation'.
{"title":"A Brief Discussion on the Philosophical Principles and Development Directions of Data Circulation","authors":"Zhi Li, Lei Zhang, Junyi Xin, Jianfei He, Yan Li, Zhenjun Ma, Qi Sun","doi":"arxiv-2407.16719","DOIUrl":"https://doi.org/arxiv-2407.16719","url":null,"abstract":"The data circulation is a complex scenario involving a large number of\u0000participants and different types of requirements, which not only has to comply\u0000with the laws and regulations, but also faces multiple challenges in technical\u0000and business areas. In order to systematically and comprehensively address\u0000these issues, it is essential to have a comprehensive and profound\u0000understanding of 'data circulation'. The traditional analysis method tends to proceed based on the traditional\u0000circulation model of commodities, that is, tangible objects, which has some\u0000defects and shortcomings, and tends to be a formalized approach, which is faced\u0000numerous challenges in practice. This paper analyzes the circulation of data\u0000with a philosophical approach, obtains the new explication of data and\u0000executing entity, and provides a new definition of the concepts of data\u0000utilization and data key stakeholders (objects). At the same time, it puts\u0000forward the idea of ``data alienation'', and constructs a new interpretive\u0000framework of ``data circulation''. Based on the framework of this interpretation, it is clearly proposed that\u0000``data alienation'' is the core of ``data circulation'', benefit distribution\u0000is the driving force, and legal compliance is the foundation, and further\u0000discussed the three modes of ``data circulation''. It further discusses the\u0000three modes of ``data circulation''. It is pointed out that ``data\u0000circulation'' is different from traditional ``commodity circulation''. To\u0000achieve ``data circulation'',a comprehensive information infrastructure needs\u0000to be established. from a theoretical point of view, it lays a solid foundation\u0000for the development of ``data circulation''.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Star scientists are highly influential researchers who have made significant contributions to their field, gained widespread recognition, and often attracted substantial research funding. They are critical for the advancement of science and innovation, and they have a significant influence on the transfer of knowledge and technology to industry. Identifying potential star scientists before their performance becomes outstanding is important for recruitment, collaboration, networking, and research-funding decisions. Using machine learning techniques, this study proposes a model to predict star scientists in the field of artificial intelligence while highlighting features related to their success. Our results confirm that rising stars follow different patterns from their non-rising counterparts in almost all early-career features. We also found that certain features, such as gender and ethnic diversity, play important roles in scientific collaboration and can significantly impact an author's career development and success. The most important features in predicting star scientists in the field of artificial intelligence were the number of articles, group discipline diversity, and weighted degree centrality. The proposed approach offers valuable insights for researchers, practitioners, and funding agencies interested in identifying and supporting talented researchers.
{"title":"Predicting Star Scientists in the Field of Artificial Intelligence: A Machine Learning Approach","authors":"Koosha Shirouyeh, Andrea Schiffauerova, Ashkan Ebadi","doi":"arxiv-2407.14559","DOIUrl":"https://doi.org/arxiv-2407.14559","url":null,"abstract":"Star scientists are highly influential researchers who have made significant\u0000contributions to their field, gained widespread recognition, and often\u0000attracted substantial research funding. They are critical for the advancement\u0000of science and innovation, and they have a significant influence on the\u0000transfer of knowledge and technology to industry. Identifying potential star\u0000scientists before their performance becomes outstanding is important for\u0000recruitment, collaboration, networking, or research funding decisions. Using\u0000machine learning techniques, this study proposes a model to predict star\u0000scientists in the field of artificial intelligence while highlighting features\u0000related to their success. Our results confirm that rising stars follow\u0000different patterns compared to their non-rising stars counterparts in almost\u0000all the early-career features. We also found that certain features such as\u0000gender and ethnic diversity play important roles in scientific collaboration\u0000and that they can significantly impact an author's career development and\u0000success. The most important features in predicting star scientists in the field\u0000of artificial intelligence were the number of articles, group discipline\u0000diversity, and weighted degree centrality. The proposed approach offers\u0000valuable insights for researchers, practitioners, and funding agencies\u0000interested in identifying and supporting talented researchers.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Buildings contain electro-mechanical systems that ensure the occupants' comfort, health, and safety. The functioning of these systems is automated through control programs, which are often available as reusable artifacts in a software library. However, matching these reusable control programs to the installed technical systems requires manual effort and adds engineering cost. In this article, we show that such matching can be accomplished fully automatically through logical rules and based on the creation of semantic relationships between descriptions of physical processes and descriptions of technical systems and control programs. For this purpose, we propose a high-level bridging ontology that enables the desired rule-based matching and equips digital twins of the technical systems with the required knowledge about the underlying physical processes in a self-contained manner. We evaluated our approach in a real-life building automation project with a total of 34 deployed air handling units. Our data show that rules based on our bridging ontology enabled the system to infer the suitable choice of control programs automatically in more than 90% of the cases while avoiding almost an hour of manual work for each such match.
{"title":"A Match Made in Semantics: Physics-infused Digital Twins for Smart Building Automation","authors":"Ganesh Ramanathan, Simon Mayer","doi":"arxiv-2406.13247","DOIUrl":"https://doi.org/arxiv-2406.13247","url":null,"abstract":"Buildings contain electro-mechanical systems that ensure the occupants'\u0000comfort, health, and safety. The functioning of these systems is automated\u0000through control programs, which are often available as reusable artifacts in a\u0000software library. However, matching these reusable control programs to the\u0000installed technical systems requires manual effort and adds engineering cost.\u0000In this article, we show that such matching can be accomplished fully\u0000automatically through logical rules and based on the creation of semantic\u0000relationships between descriptions of emph{physical processes} and\u0000descriptions of technical systems and control programs. For this purpose, we\u0000propose a high-level bridging ontology that enables the desired rule-based\u0000matching and equips digital twins of the technical systems with the required\u0000knowledge about the underlying physical processes in a self-contained manner.\u0000We evaluated our approach in a real-life building automation project with a\u0000total of 34 deployed air handling units. Our data show that rules based on our\u0000bridging ontology enabled the system to infer the suitable choice of control\u0000programs automatically in more than 90% of the cases while avoiding almost an\u0000hour of manual work for each such match.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"50 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141500573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital transformation (DX) has recently become a pressing issue for many companies, as the latest digital technologies, such as artificial intelligence and the Internet of Things, can be easily utilized. However, although companies can improve their operations through digital technologies, devising new business models is not easy. Thus, business model design support methods are needed for people who lack digital technology expertise. Meanwhile, large language models (LLMs), represented by ChatGPT, and natural language processing utilizing LLMs have developed rapidly. A business model design support system that utilizes these technologies has great potential, but research in this area is scant. Accordingly, this study proposes an LLM-based method for comparing and analyzing similar companies from different business domains as a first step toward business model design support utilizing LLMs. This method can support idea generation in digital business model design.
{"title":"Digital Business Model Analysis Using a Large Language Model","authors":"Masahiro Watanabe, Naoshi Uchihira","doi":"arxiv-2406.05741","DOIUrl":"https://doi.org/arxiv-2406.05741","url":null,"abstract":"Digital transformation (DX) has recently become a pressing issue for many\u0000companies as the latest digital technologies, such as artificial intelligence\u0000and the Internet of Things, can be easily utilized. However, devising new\u0000business models is not easy for compa-nies, though they can improve their\u0000operations through digital technologies. Thus, business model design support\u0000methods are needed by people who lack digital tech-nology expertise. In\u0000contrast, large language models (LLMs) represented by ChatGPT and natural\u0000language processing utilizing LLMs have been developed revolutionarily. A\u0000business model design support system that utilizes these technologies has great\u0000potential. However, research on this area is scant. Accordingly, this study\u0000proposes an LLM-based method for comparing and analyzing similar companies from\u0000different business do-mains as a first step toward business model design\u0000support utilizing LLMs. This method can support idea generation in digital\u0000business model design.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"95 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141508555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the biological evolution of low-latency natural neural networks for short-term survival and its parallels in the development of low-latency, high-performance central processing units (CPUs) in computer design and architecture. The need for accurate, high-quality display of motion pictures led to specialized processing units known as GPUs, just as specialized visual-cortex regions in animals produced similar low-latency computational capacity. The human brain, considered here as a scaled-up version of a primate brain that evolved in response to a genomic bottleneck, is trainable and prunable by society and, as a further extension, invents language, writing, and the storage of narratives displaced in time and space. We conclude that the modern digital invention of social media and the archived collective common corpus has further evolved from simple CPU-based, low-latency fast retrieval to high-throughput parallel processing of data using GPUs to train attention-based deep learning neural networks, producing generative AI with aspects such as toxicity, bias, memorization, and hallucination, with intriguing close parallels in humans and their society. We show how this paves the way for constructive approaches to eliminating such drawbacks from human society and its proxy and collective large-scale mirror, the generative AI of LLMs.
{"title":"Genetic Bottleneck and the Emergence of High Intelligence by Scaling-out and High Throughput","authors":"Arifa Khan, Saravanan P, Venkatesan S. K.","doi":"arxiv-2407.08743","DOIUrl":"https://doi.org/arxiv-2407.08743","url":null,"abstract":"We study the biological evolution of low-latency natural neural networks for\u0000short-term survival, and its parallels in the development of low latency\u0000high-performance Central Processing Unit in computer design and architecture.\u0000The necessity of accurate high-quality display of motion picture led to the\u0000special processing units known as the GPU, just as how special visual cortex\u0000regions of animals produced such low-latency computational capacity. The human\u0000brain, especially considered as nothing but a scaled-up version of a primate\u0000brain evolved in response to genomic bottleneck, producing a brain that is\u0000trainable and prunable by society, and as a further extension, invents\u0000language, writing and storage of narratives displaced in time and space. We\u0000conclude that this modern digital invention of social media and the archived\u0000collective common corpus has further evolved from just simple CPU-based\u0000low-latency fast retrieval to high-throughput parallel processing of data using\u0000GPUs to train Attention based Deep Learning Neural Networks producing\u0000Generative AI with aspects like toxicity, bias, memorization, hallucination,\u0000with intriguing close parallels in humans and their society. We show how this\u0000paves the way for constructive approaches to eliminating such drawbacks from\u0000human society and its proxy and collective large-scale mirror, the Generative\u0000AI of the LLMs.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141719880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given the rapid advances of large language models (LLMs), it is vitally important to study their behavior and apply their utility across scientific fields. Psychology has, in recent years, been only poorly approached with novel computational tools. One reason is the high complexity of the data required for a proper analysis. Moreover, psychology, particularly psychometrics, has few datasets available for analysis and artificial intelligence usage. For these reasons, this study introduces a synthetic database of short essays labeled based on the five factor model (FFM) of personality traits.
{"title":"Big5PersonalityEssays: Introducing a Novel Synthetic Generated Dataset Consisting of Short State-of-Consciousness Essays Annotated Based on the Five Factor Model of Personality","authors":"Iustin Floroiu","doi":"arxiv-2407.17586","DOIUrl":"https://doi.org/arxiv-2407.17586","url":null,"abstract":"Given the high advances of large language models (LLM) it is of vital\u0000importance to study their behaviors and apply their utility in all kinds of\u0000scientific fields. Psychology has been, in recent years, poorly approached\u0000using novel computational tools. One of the reasons is the high complexity of\u0000the data required for a proper analysis. Moreover, psychology, with a focus on\u0000psychometry, has few datasets available for analysis and artificial\u0000intelligence usage. Because of these facts, this study introduces a synthethic\u0000database of short essays labeled based on the five factor model (FFM) of\u0000personality traits.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"71 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The floorplanning of Systems-on-a-Chip (SoCs) and of chip subsystems is a crucial step in the physical design flow, as it determines the optimal shapes and locations of the blocks that make up the system. Simulated Annealing (SA) has been the method of choice for tackling classical floorplanning problems, where the objective is to minimize wirelength and the total placement area. In industry-relevant floorplanning problems, however, the goal is not only to minimize area and wirelength but to do so while respecting hard placement constraints that specify the general area and/or the specific locations for the placement of some blocks. We show that simply incorporating these constraints into the SA objective function leads to sub-optimal, and often illegal, solutions. We propose the Constraints-Aware Simulated Annealing (CA-SA) method and show that it strongly outperforms vanilla SA in floorplanning problems with hard placement constraints. We developed a new floorplanning tool on top of CA-SA: PARSAC (Parallel Simulated Annealing with Constraints). PARSAC is an efficient, easy-to-use, and massively parallel floorplanner. Unlike current SA-based or learning-based floorplanning tools, which cannot effectively incorporate hard placement constraints, PARSAC can quickly construct the Pareto-optimal front of legal solutions for constrained floorplanning problems. PARSAC also outperforms traditional SA on legacy floorplanning benchmarks. PARSAC is available as an open-source repository for researchers to replicate and build on our results.
{"title":"PARSAC: Fast, Human-quality Floorplanning for Modern SoCs with Complex Design Constraints","authors":"Hesham Mostafa, Uday Mallappa, Mikhail Galkin, Mariano Phielipp, Somdeb Majumdar","doi":"arxiv-2405.05495","DOIUrl":"https://doi.org/arxiv-2405.05495","url":null,"abstract":"The floorplanning of Systems-on-a-Chip (SoCs) and of chip sub- systems is a\u0000crucial step in the physical design flow as it determines the optimal shapes\u0000and locations of the blocks that make up the system. Simulated Annealing (SA)\u0000has been the method of choice for tackling classical floorplanning problems\u0000where the objective is to minimize wire-length and the total placement area.\u0000The goal in industry-relevant floorplanning problems, however, is not only to\u0000minimize area and wire-length, but to do that while respecting hard placement\u0000constraints that specify the general area and/or the specific locations for the\u0000placement of some blocks. We show that simply incorporating these constraints\u0000into the SA objective function leads to sub-optimal, and often illegal,\u0000solutions. We propose the Constraints-Aware Simulated Annealing (CA-SA) method\u0000and show that it strongly outperforms vanilla SA in floorplanning problems with\u0000hard placement constraints. We developed a new floorplan- ning tool on top of\u0000CA-SA: PARSAC (Parallel Simulated Annealing with Constraints). PARSAC is an\u0000efficient, easy-to-use, and mas- sively parallel floorplanner. Unlike current\u0000SA-based or learning- based floorplanning tools that cannot effectively\u0000incorporate hard placement-constraints, PARSAC can quickly construct the\u0000Pareto- optimal legal solutions front for constrained floorplanning problems.\u0000PARSAC also outperforms traditional SA on legacy floorplanning benchmarks.\u0000PARSAC is available as an open-source repository for researchers to replicate\u0000and build on our result.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140940143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The iterative development process is a framework used to design products and applications across a wide range of domains. It centers around building prototypes, testing them, and updating based on the test results. We discuss how we applied this technique to create Fractal Emergence, an interactive piece of mathematical art.
{"title":"Applying the Iterative Development Process: The Creation of Fractal Emergence","authors":"Christopher R. H. Hanusa, Eric Vergo","doi":"arxiv-2405.04544","DOIUrl":"https://doi.org/arxiv-2405.04544","url":null,"abstract":"The iterative development process is a framework used to design products and\u0000applications across a wide range of domains. It centers around building\u0000prototypes, testing them, and updating based on the test results. We discuss\u0000how we applied this technique to create Fractal Emergence, an interactive piece\u0000of mathematical art.","PeriodicalId":501310,"journal":{"name":"arXiv - CS - Other Computer Science","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140940083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}