Building up End-to-end Mask Optimization Framework with Self-training

Bentian Jiang, Xiaopeng Zhang, Lixin Liu, Evangeline F. Y. Young
{"title":"Building up End-to-end Mask Optimization Framework with Self-training","authors":"Bentian Jiang, Xiaopeng Zhang, Lixin Liu, Evangeline F. Y. Young","doi":"10.1145/3439706.3447050","DOIUrl":null,"url":null,"abstract":"With the continuous shrinkage of device technology node, the tremendously increasing demands for resolution enhancement technologies (RETs) have created severe concerns over the balance between computational affordability and model accuracy. Having realized the analogies between computational lithography tasks and deep learning-based computer vision applications (e.g., medical image analysis), both industry and academia start gradually migrating various RETs to deep learning-enabled platforms. In this paper, we propose a unified self-training paradigm for building up an end-to-end mask optimization framework from undisclosable layout patterns. Our proposed flow comprises (1) a learning-based pattern generation stage to massively synthesize diverse and realistic layout patterns following the distribution of the undisclosable target layouts, while keeping these confidential layouts blind for any successive training stage, and (2) a complete self-training stage for building up an end-to-end on-neural-network mask optimization framework from scratch, which only requires the aforementioned generated patterns and a compact lithography simulation model as the inputs. Quantitative results demonstrate that our proposed flow achieves comparable state-of-the-art (SOTA) performance in terms of both mask printability and mask correction time while reducing 66% of the turn around time for flow construction.","PeriodicalId":184050,"journal":{"name":"Proceedings of the 2021 International Symposium on Physical Design","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 International Symposium on Physical Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3439706.3447050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

With the continuous shrinkage of device technology nodes, the rapidly increasing demand for resolution enhancement technologies (RETs) has raised serious concerns about the balance between computational affordability and model accuracy. Having recognized the analogies between computational lithography tasks and deep learning-based computer vision applications (e.g., medical image analysis), both industry and academia have started gradually migrating various RETs to deep learning-enabled platforms. In this paper, we propose a unified self-training paradigm for building up an end-to-end mask optimization framework from undisclosable layout patterns. Our proposed flow comprises (1) a learning-based pattern generation stage that massively synthesizes diverse and realistic layout patterns following the distribution of the undisclosable target layouts, while keeping these confidential layouts blind to any subsequent training stage, and (2) a complete self-training stage that builds an end-to-end on-neural-network mask optimization framework from scratch, requiring only the aforementioned generated patterns and a compact lithography simulation model as inputs. Quantitative results demonstrate that our proposed flow achieves performance comparable to the state of the art (SOTA) in terms of both mask printability and mask correction time, while reducing the turnaround time for flow construction by 66%.
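The abstract describes a two-stage flow: a pattern generation stage that supplies synthetic training layouts, and a self-training stage in which a mask-prediction network is trained against a compact lithography simulation model, so that no confidential layouts ever enter the training loop. Below is a minimal, hypothetical PyTorch sketch of how such a self-training loop could be wired together. All names (MaskNet, litho_forward, synthesize_patterns) and the toy lithography surrogate are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskNet(nn.Module):
    """Toy fully convolutional network mapping a target layout to a mask (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, layout):
        return self.net(layout)

def litho_forward(mask):
    """Stand-in for a compact, differentiable lithography simulation model:
    average pooling crudely mimics optical blur, a steep sigmoid mimics the resist threshold."""
    aerial = F.avg_pool2d(mask, kernel_size=7, stride=1, padding=3)
    return torch.sigmoid(50.0 * (aerial - 0.5))

def synthesize_patterns(batch_size, size=128):
    """Placeholder for the learning-based pattern generation stage; in the paper this is a
    generative model trained to follow the distribution of the undisclosed target layouts."""
    return (torch.rand(batch_size, 1, size, size) > 0.7).float()

model = MaskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    target = synthesize_patterns(batch_size=4)  # stage (1): synthetic layouts only
    mask = model(target)                        # predict a candidate mask
    printed = litho_forward(mask)               # simulate the printed wafer image
    loss = F.mse_loss(printed, target)          # printability: printed image should match the target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The point this sketch illustrates is that the training loop consumes only synthetic patterns and a differentiable lithography surrogate: the printability loss between the simulated wafer image and the target layout is the sole supervision signal, so neither manually optimized masks nor the confidential layouts are required.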