{"title":"Session details: PD for Reliability and Adaptability","authors":"Shuai Li","doi":"10.1145/3251203","DOIUrl":"https://doi.org/10.1145/3251203","url":null,"abstract":"","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"47 23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122214133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proximity Optimization for Adaptive Circuit Design","authors":"Ang Lu, Hao He, Jiang Hu","doi":"10.1145/2872334.2872354","DOIUrl":"https://doi.org/10.1145/2872334.2872354","url":null,"abstract":"The performance growth of conventional VLSI circuits is seriously hampered by various variation effects and the fundamental limit of chip power density. Adaptive circuit design is recognized as a power-efficient approach to tackling the variation challenge. However, it tends to entail large area overhead if not carefully designed. This work studies how to reduce the overhead by forming adaptivity blocks considering both timing and spatial proximity among logic cells. The proximity optimization consists of timing- and location-aware cell clustering and incremental placement enforcing the clusters. Experiments are performed on the ICCAD 2014 benchmark circuits, which include a case of nearly one million cells. Compared to alternative methods, our approach achieves 1/4 to 3/4 area overhead reduction with an average of 0.6% wirelength overhead, while retaining about the same timing yield and power.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130499399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling Beyond 7nm: Design-Technology Co-optimization at the Rescue","authors":"J. Ryckaert","doi":"10.1145/2872334.2893446","DOIUrl":"https://doi.org/10.1145/2872334.2893446","url":null,"abstract":"At 7nm and beyond, designers need to support scaling by identifying the optimal patterning schemes for their designs. Moreover, designers can actively help by exploring scaling options that do not necessarily require aggressive pitch scaling. In this talk we will illustrate how design-technology co-optimization can help achieve the expected Moore's law scaling; how optimizing device performance can lead to smaller standard cells; how the metal interconnect stack needs to be adjusted for unidirectional metals; and how a vertical transistor can shift design paradigms. This paper demonstrates that scaling has become a joint design-technology co-optimization effort between process technology and design specialists, one that extends far beyond patterning-enabled dimensional scaling.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121625405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Interactive Physical Synthesis Methodology for High-Frequency FPGA Designs","authors":"Sabya Das, R. Aggarwal, Zhiyong Wang","doi":"10.1145/2872334.2872340","DOIUrl":"https://doi.org/10.1145/2872334.2872340","url":null,"abstract":"State-of-the-art FPGA design has become a very complex process, primarily due to the aggressive timing requirements of the designs. Designers spend a significant amount of time and effort trying to close timing on their latest designs. In that timing closure methodology, physical synthesis plays a key role in boosting design performance. In traditional approaches, the user performs placement followed by physical synthesis. As design complexity increases, physical synthesis cannot perform all the optimization steps due to the physical constraints imposed by the placement operation. In this work, we propose an interactive methodology to perform physical synthesis in the pre-placement stage of the FPGA timing closure flow. The approach works in two iterations of the design flow. In the first iteration, the designer performs the regular post-placement physical synthesis operation on the design. That phase automatically writes a replayable file containing information about all the optimization actions. That file also contains all the attempted optimization moves that physical synthesis deemed beneficial from a QoR perspective but was not able to accept due to physical constraints. In the second iteration of the design flow, the designer performs all those physical synthesis optimizations by importing the replayable file in the pre-placement stage. In addition to applying the physical synthesis flow's changes, it also performs the optimizations that were not possible in the traditional physical synthesis flow. After these changes are made in the logical stage of the design flow, the crucial placement step can adapt to the optimized netlist structure. As a result, this approach will greatly help users reach their challenging timing closure goals. We have evaluated the effectiveness and performance of our proposed approach on a large set of industrial designs. All these designs were targeted towards the latest Xilinx UltraScale™ devices. Our experimental data indicates that the proposed approach improves design performance by 4% to 5% on average.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130087978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Machine Learning Based Framework for Sub-Resolution Assist Feature Generation","authors":"Xiaoqing Xu, Tetsuaki Matsunawa, S. Nojima, C. Kodama, T. Kotani, D. Pan","doi":"10.1145/2872334.2872357","DOIUrl":"https://doi.org/10.1145/2872334.2872357","url":null,"abstract":"Sub-Resolution Assist Feature (SRAF) generation is a very important resolution enhancement technique for improving yield in modern semiconductor manufacturing processes. Model-based SRAF generation has been widely used to achieve high accuracy, but it is known to be time-consuming, and it is hard to obtain consistent SRAFs on the same layout pattern configurations. This paper proposes the first machine learning based framework for fast yet consistent SRAF generation with high quality of results. Our technical contributions include robust feature extraction, novel feature compaction, model training for SRAF classification and prediction, and the final SRAF generation with consideration of practical mask manufacturing constraints. Experimental results demonstrate that, compared with the commercial Calibre tool, our machine learning based SRAF generation obtains a 10X speedup and comparable performance in terms of edge placement error (EPE) and process variation (PV) band.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132144551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Some Observations on the Physical Design of the Next Decade","authors":"A. Domic","doi":"10.1145/2872334.2878630","DOIUrl":"https://doi.org/10.1145/2872334.2878630","url":null,"abstract":"While physical design continues to fight the traditional data capacity and runtime challenges, it has also become critically important to overcome many drawbacks of the silicon technology roadmap. At the emerging technology nodes, namely 10, 7 and 5 nanometers, sheer complexity hits unprecedented levels. Integration capacity in terms of number of transistors already exceeds 100 billion transistors per die, with 1 trillion within our reach. Complex standard-cell abutment and multi-VTH design rules pose new placement challenges. Non-planar transistors get smaller and taller, but contacted metal pitch doesn't scale accordingly, thus making pin accessibility harder and introducing new routing congestion issues. The lithography transition to EUV is still unclear, meaning that triple, quadruple, and even octuple patterning cannot be ruled out. Interconnect RC delay not only has by far the lion's share of total delay, but its variation across the stack has reached over one order of magnitude between the lowest (Mx) and the highest (Mz) layers, while the R contribution of vias increases dramatically. Finally, the modelling, characterization, and computing of near-threshold -- ultra-low voltage -- design effects and their impact on timing and power bring design closure up to a much higher level of complexity. At the established technology nodes, unlike in the past, the oldest nodes are not discontinued. On the contrary, not only is the number of active technology nodes in volume production increasing, but more than 90% of designs in 2016 will be at 45/40 nanometers and above, accounting for more than 60% of wafer production by area. However, today 180 nm designs are radically different from their late 1990s distant relatives. Physical design is increasingly being relied upon to achieve lower area and power, as well as to reduce the required silicon resources in the interest of a better performance and power envelope at a lower cost. Sophisticated physical design methodologies, originally devised for survival at the emerging technology nodes, are more and more frequently used to improve the metrics of the established technology nodes, and to extend their useful lifespan for a very long time. Production volumes dictate which applications rush to the newest emerging technology nodes and which ones continue to hold at the established nodes. However, it is increasingly difficult to integrate digital computing with analog interfaces, to say nothing of sensors and actuators, energy harvesting or silicon photonics. It is hard to think of digital and true analog & mixed-signal blocks co-existing on the same die at 7 or 5 nanometers. 2.5D-IC and perhaps eventually 3D-IC integration will be required whenever digital computing won't be sufficient. For all these reasons, the scope of physical design is expanding. On the one hand, all the diverse requirements of a broadening set of technology nodes have to be taken into consideration because our industry can","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121509128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling Up Physical Design: Challenges and Opportunities","authors":"Guojie Luo, Wentai Zhang, Jiaxi Zhang, J. Cong","doi":"10.1145/2872334.2872342","DOIUrl":"https://doi.org/10.1145/2872334.2872342","url":null,"abstract":"Due to the continuous scaling of integration density and the increasing diversity of customized designs, there are increasing demands on the scalability and the customization of EDA tools and flows. Commercial EDA tools usually provide an interface of TCL scripting to extract and modify the design information for a flexible design flow. However, we observe that current TCL scripting is not designed for complete netlist extraction, resulting in a significant degradation in performance. For example, it takes over 20 minutes to extract the complete netlist of a 466K-cell design using TCL. This extraction may be repeated several times when interfacing between the existing EDA platforms and the actual distributed EDA algorithms. This drastic decrease in efficiency is a great barrier for customized EDA tool development. In this paper, we propose to build a distributed framework on top of TCL to accelerate netlist extraction, and use distributed detailed placement as an example to demonstrate its capability. This framework is promising in scaling out physical design algorithms to run on a cluster.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114827203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PLATON","authors":"Anja von Beuningen, Ulf Schlichtmann","doi":"10.1145/2872334.2872356","DOIUrl":"https://doi.org/10.1145/2872334.2872356","url":null,"abstract":"Optical Networks-on-Chip (ONoCs) are a promising technology to further increase the bandwidth and decrease the power consumption of today's multicore systems. To determine the laser power consumption of an ONoC, the physical design of the system is indispensable. The only place and route tool for 3D ONoCs proposed in the literature so far scales poorly with the increasing number of optical devices. Thus, within this contribution we present the first force-directed placement algorithm for 3D optical NoCs. Our algorithm decreases the runtime by up to 99.7% compared to the state-of-the-art placer. Using our algorithm, large topologies can be placed within a short runtime.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116967169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Designer's Perspective on Timing Closure","authors":"G. Ford","doi":"10.1145/2872334.2872339","DOIUrl":"https://doi.org/10.1145/2872334.2872339","url":null,"abstract":"As technology nodes advance and designs become more complex, EDA developers strive to provide new and better solutions to the traditional problems encountered in back-end implementation. The usefulness of these EDA solutions is dependent on acceptance by designers, who often have different goals than developers. This presentation will offer a designer's perspective on current EDA solutions, with a focus on timing closure. Topics will include automated floorplanning goals, clock tree structure tradeoffs, data net repowering challenges and hold padding strategies. Examples from design experience will be used to illustrate where current EDA solutions work well, and what the obstacles are in cases where they do not.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115698156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complexity and Diversity in IC Layout Design","authors":"R. Otten","doi":"10.1145/2872334.2872348","DOIUrl":"https://doi.org/10.1145/2872334.2872348","url":null,"abstract":"The paper is a concise survey as well as an exposition of ideas about the automation of layout design. Central is a discussion of the imperatives of a layout design system suitable for VLSI. Of course, such a system has to take account of its embedding into an integrated design system. However, layout design faces two major problems. One results from industry's ability to pack over 10000 gate equivalents into a single chip. Besides this increase in complexity, today's micro-electronics technology has made a variety of processes - each with its own set of design rules - available for integration. This diversity has existed for a long time, but complexity raised the problem, since the development of efficient systems for designing complex systems is costly and time-consuming.","PeriodicalId":272036,"journal":{"name":"Proceedings of the 2016 on International Symposium on Physical Design","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131989330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}