{"title":"Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers","authors":"Raika Karimi, Faezeh Faez, Yingxue Zhang, Xing Li, Lei Chen, Mingxuan Yuan, Mahdi Biparva","doi":"arxiv-2409.10653","DOIUrl":null,"url":null,"abstract":"Contemporary hardware design benefits from the abstraction provided by\nhigh-level logic gates, streamlining the implementation of logic circuits.\nLogic Synthesis Optimization (LSO) operates at one level of abstraction within\nthe Electronic Design Automation (EDA) workflow, targeting improvements in\nlogic circuits with respect to performance metrics such as size and speed in\nthe final layout. Recent trends in the field show a growing interest in\nleveraging Machine Learning (ML) for EDA, notably through ML-guided logic\nsynthesis utilizing policy-based Reinforcement Learning (RL) methods.Despite\nthese advancements, existing models face challenges such as overfitting and\nlimited generalization, attributed to constrained public circuits and the\nexpressiveness limitations of graph encoders. To address these hurdles, and\ntackle data scarcity issues, we introduce LSOformer, a novel approach\nharnessing Autoregressive transformer models and predictive SSL to predict the\ntrajectory of Quality of Results (QoR). LSOformer integrates cross-attention\nmodules to merge insights from circuit graphs and optimization sequences,\nthereby enhancing prediction accuracy for QoR metrics. Experimental studies\nvalidate the effectiveness of LSOformer, showcasing its superior performance\nover baseline architectures in QoR prediction tasks, where it achieves\nimprovements of 5.74%, 4.35%, and 17.06% on the EPFL, OABCD, and proprietary\ncircuits datasets, respectively, in inductive setup.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"21 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10653","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Contemporary hardware design benefits from the abstraction provided by high-level logic gates, streamlining the implementation of logic circuits. Logic Synthesis Optimization (LSO) operates at one level of abstraction within the Electronic Design Automation (EDA) workflow, targeting improvements in logic circuits with respect to performance metrics such as size and speed in the final layout. Recent trends in the field show a growing interest in leveraging Machine Learning (ML) for EDA, notably through ML-guided logic synthesis utilizing policy-based Reinforcement Learning (RL) methods. Despite these advancements, existing models face challenges such as overfitting and limited generalization, attributed to the limited number of publicly available circuits and the expressiveness limitations of graph encoders. To address these hurdles and tackle data scarcity, we introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive self-supervised learning (SSL) to predict the trajectory of Quality of Results (QoR). LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics. Experimental studies validate the effectiveness of LSOformer, showing superior performance over baseline architectures on QoR prediction tasks, with improvements of 5.74%, 4.35%, and 17.06% on the EPFL, OABCD, and proprietary circuit datasets, respectively, in the inductive setup.
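The abstract does not include code, but the described architecture (a causal transformer over an optimization recipe whose cross-attention reads circuit-graph embeddings and predicts a QoR value at each step) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the paper's implementation: the class name QoRTrajectoryModel, the learned positional embeddings, the dimensions, and the choice of a single scalar QoR per step are all hypothetical.

```python
import torch
import torch.nn as nn

class QoRTrajectoryModel(nn.Module):
    """Hypothetical sketch: causal transformer that predicts one QoR value
    per optimization step, cross-attending to circuit-graph node embeddings."""

    def __init__(self, num_commands, d_model=128, nhead=4, num_layers=3, max_len=64):
        super().__init__()
        # Tokens for optimization commands (e.g., ABC's rewrite/refactor/balance).
        self.cmd_embed = nn.Embedding(num_commands, d_model)
        # Learned positions along the optimization recipe (an assumption).
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # One scalar QoR prediction (e.g., node count or delay) per step.
        self.qor_head = nn.Linear(d_model, 1)

    def forward(self, cmd_ids, graph_node_emb):
        # cmd_ids:        (B, T) token ids of the optimization recipe
        # graph_node_emb: (B, N, d_model) node embeddings from a circuit-graph encoder
        B, T = cmd_ids.shape
        pos = torch.arange(T, device=cmd_ids.device)
        x = self.cmd_embed(cmd_ids) + self.pos_embed(pos)  # broadcast over batch
        # Causal mask: step t may only attend to steps <= t (autoregressive).
        causal = torch.triu(
            torch.full((T, T), float("-inf"), device=cmd_ids.device), diagonal=1
        )
        # Self-attention runs over the recipe; cross-attention reads the circuit graph.
        h = self.decoder(tgt=x, memory=graph_node_emb, tgt_mask=causal)
        return self.qor_head(h).squeeze(-1)  # (B, T): predicted QoR trajectory
```

Under these assumptions, `model(cmd_ids, graph_node_emb)` yields a per-step QoR trajectory that could be trained with a regression loss (e.g., MSE) against the ground-truth QoR measured after each command, which is one plausible reading of the predictive SSL objective described above.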