Application of machine learning methods in post-silicon yield improvement
B. Yigit, Grace Li Zhang, Bing Li, Yiyu Shi, Ulf Schlichtmann
2017 30th IEEE International System-on-Chip Conference (SOCC), September 2017
DOI: 10.1109/SOCC.2017.8226049
Citations: 4
Abstract
In nanometer-scale manufacturing, process variations have a significant impact on circuit performance. To handle them, post-silicon clock tuning buffers can be inserted into the circuit to balance the timing budgets of neighboring critical paths. The state of the art is a sampling-based approach in which an integer linear programming (ILP) problem must be solved for every sample. The runtime complexity of this approach is therefore the number of samples multiplied by the time required for one ILP solution. Existing work tries to reduce the number of samples but still leaves the long overall runtime unsolved. In this paper, we propose a machine learning approach that reduces the runtime by learning the positions and sizes of post-silicon tuning buffers. Experimental results demonstrate that we can predict buffer locations and sizes with high accuracy (90% and above) and achieve a significant yield improvement (up to 18.8%) with a significant speed-up (up to almost 20 times) compared to existing work.
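The cost structure of the baseline makes the opportunity clear: total runtime is roughly the number of samples times the per-sample ILP solve time, so replacing most ILP solves with cheap model inference is where the speed-up comes from. The sketch below illustrates that idea only in spirit; the abstract does not disclose the model or features used, so the random-forest classifier, the synthetic path-delay features, and the placeholder ILP labeler here are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed setup, not the paper's method): train a model on a
# small set of ILP-solved process-variation samples, then predict tuning-buffer
# sizes for the remaining samples instead of solving an ILP for each one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_train, n_test, n_paths, n_buffers = 200, 10_000, 32, 8

# Features: sampled delays of neighboring critical paths under process
# variation (synthetic stand-ins for measured or simulated path delays).
X_train = rng.normal(loc=1.0, scale=0.05, size=(n_train, n_paths))
X_test = rng.normal(loc=1.0, scale=0.05, size=(n_test, n_paths))

def ilp_buffer_sizes(delays):
    # Placeholder for the expensive per-sample ILP solve; here a trivial
    # thresholding rule stands in for the solver's discrete buffer sizes.
    return (delays[:, :n_buffers] > 1.0).astype(int)

# Labels: buffer-size assignments obtained by running the ILP on the
# (small) training set only.
y_train = ilp_buffer_sizes(X_train)

# One multi-output classifier predicts the size of every buffer at once;
# scikit-learn's RandomForestClassifier supports 2-D label arrays natively.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Inference replaces 10,000 ILP solves with one fast batched prediction.
y_pred = model.predict(X_test)
print(y_pred.shape)  # (10000, 8): a buffer-size assignment per sample
```

In this sketch the ILP cost is still paid once, on the small training subset, while all remaining samples are handled by a single batched prediction; that trade is the mechanism behind the reported speed-up, though the numbers above are illustrative, not the paper's measurements.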