Microfluidic Interposer for High Performance Fluidic Chip Cooling

W. Steller, F. Windrich, D. Bremner, S. Robertson, R. Mrossko, J. Keller, T. Brunschwiler, G. Schlottig, H. Oppermann, M. Wolf, K. Lang
2018 7th Electronic System-Integration Technology Conference (ESTC), September 2018. DOI: 10.1109/ESTC.2018.8546344. Citations: 5.

Abstract

High operating temperatures are a major factor limiting long-term reliability, so an efficient cooling approach is crucial, especially for high-performance computing (HPC) processors. As a reference, the "International Technology Roadmap for Semiconductors" (ITRS) predicted a power consumption of about 700 W for data-center server processors [1]. Various cooling approaches have already been investigated [2], but current solutions are not sufficient to meet the demanding thermal specifications of HPC systems. On the one hand, insufficient cooling performance raises the chip junction temperature beyond the critical point. On the other hand, performance requirements (e.g. low latency, higher bandwidth) force the use of 3D integration of components, which further increases heat build-up [3, 4, 5]. Therefore, only the direct integration of a cooling solution within the 3D stack can fully eliminate the overheating bottleneck, and fluidic cooling has high potential to fulfill the requirements of such direct integration [6]. This work shows the integration and realization of microfluidic features (microfluidic channels and fluidic inlets/outlets) in an interposer. Furthermore, we present the integration of this fluidic interposer into a System in Package (SiP) to realize dual-side chip cooling with a heat dissipation of 672 W (168 W/cm², which correlates with the power consumption predicted for data-center server processors by the ITRS roadmap [1]).
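The quoted figures imply a cooled area of 672 W / 168 W/cm² = 4 cm². A minimal sketch checking this arithmetic, together with an illustrative single-phase coolant-flow estimate via Q = ṁ·c_p·ΔT — note that the use of water and the 30 K coolant temperature rise are assumptions for illustration only, not values taken from the paper:

```python
# Sanity check of the heat-flux figures quoted in the abstract, plus an
# illustrative coolant-flow estimate. Water as coolant and a 30 K
# temperature rise are ASSUMPTIONS for illustration, not paper values.

total_power_w = 672.0    # total heat dissipation reported in the abstract (W)
heat_flux_w_cm2 = 168.0  # reported chip heat flux (W/cm^2)

# Implied total cooled chip area:
area_cm2 = total_power_w / heat_flux_w_cm2   # 672 / 168 = 4 cm^2

# Single-phase cooling energy balance: Q = m_dot * c_p * dT
cp_water = 4186.0   # specific heat of water, J/(kg*K)
delta_t_k = 30.0    # assumed coolant temperature rise (illustrative)
rho_water = 1000.0  # approximate water density, kg/m^3

m_dot_kg_s = total_power_w / (cp_water * delta_t_k)        # required mass flow
flow_l_min = m_dot_kg_s / rho_water * 1000.0 * 60.0        # convert to L/min

print(f"implied cooled area: {area_cm2:.1f} cm^2")
print(f"required water flow: {flow_l_min:.2f} L/min at dT = {delta_t_k:.0f} K")
```

Under these assumptions the required flow is well below one liter per minute, which is consistent with the general feasibility of interposer-integrated microchannel cooling at this power level.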