
Latest Publications from the 2023 International Conference on Electronics, Information, and Communication (ICEIC)

Audio-to-Facial Landmarks Generator for Talking Face Video Synthesis
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049847
Dasol Jeong, Injae Lee, J. Paik
Audio-driven talking face methods have been studied to achieve accurate lip synchronization. However, generating head-pose motion and personalized facial features remains a challenging problem. Solving it requires identifying the context from the audio, generating the head pose and lip motion, and synthesizing a personalized face. We introduce a facial landmark generation method that produces audio-based head pose and lip motion using an audio transformer. The audio transformer extracts audio features containing contextual information and generates generalized head-pose and lip-motion landmarks. To synthesize personalized features on the generated landmarks, a talking face video is generated by applying a method learned through meta-learning. With only a few images, even unseen faces can be made to speak the desired audio. In addition, the proposed method is applicable to various languages and enables photo-realistic synthesis and fast inference.
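A minimal, hypothetical PyTorch sketch of the landmark generation stage described above: a Transformer encoder maps a sequence of audio feature frames to per-frame landmark coordinates. The class name, the 80-dimensional audio features, the layer counts, and the 68-landmark output are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AudioToLandmarks(nn.Module):
    """Transformer encoder over audio frames -> (x, y) landmark coordinates per frame."""
    def __init__(self, audio_dim=80, model_dim=256, n_landmarks=68, n_layers=4, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(audio_dim, model_dim)              # embed per-frame audio features
        layer = nn.TransformerEncoderLayer(d_model=model_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)  # contextual audio features
        self.head = nn.Linear(model_dim, n_landmarks * 2)        # head pose + lip motion as landmarks

    def forward(self, audio_feats):                              # audio_feats: (batch, frames, audio_dim)
        h = self.encoder(self.proj(audio_feats))
        return self.head(h).view(h.size(0), h.size(1), -1, 2)    # (batch, frames, n_landmarks, 2)

landmarks = AudioToLandmarks()(torch.randn(1, 100, 80))          # 100 audio frames of 80 features
print(landmarks.shape)                                           # torch.Size([1, 100, 68, 2])
```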
Citations: 0
Verilator-based Fast Verification Methodology for BLE MAC Hardware
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049940
Eun-Gyeong Ham, Yujin Jeon, Jaeyun Lim, Ji-Hoon Kim
Following the market trend, fast and strict verification of the hardware architecture is essential to save production cost and time. Recently, Verilator, an open-source Verilog simulator, has been widely used because of its fast simulation time and ease of use. In this paper, we present a fast verification methodology for BLE (Bluetooth Low Energy) MAC (Medium Access Control) hardware based on Verilator. Since C++-based models can be easily integrated into the Verilog-based hardware platform with Verilator, complex verification scenarios with various parameter configurations can be supported. Compared to a commercial Verilog simulator, our verification platform shows up to a 5.94x improvement in simulation time.
Citations: 0
Lightweighted FPGA Implementation of Even-Odd-Buffered Active Noise Canceller with On-Chip Convolution Acceleration Units
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049949
Seunghyun Park, Daejin Park
To make the acoustic signal processed by the noise canceller sound more natural to users [1]–[3], the delay in anti-noise generation should be reduced. With a single buffer, a processing delay occurs because input signals cannot be written while the processor is processing the data. When interfering the anti-noise with the output signal, this processing delay creates additional buffering overhead to match the phase. The processing delay can be minimized by using an even-/odd-buffer structure that alternates read and write operations. In addition, the two noise cancellation methods (FFT-based noise cancellation and an adaptive algorithm) are compared in terms of output signal quality, processing time, and power consumption. As a result, using an even-/odd-buffer reduced the processing delay of a single buffer, and the FFT-based noise canceling method showed fewer errors than the adaptive noise canceling method.
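A minimal sketch of the even-/odd-buffer (ping-pong) idea: while one buffer is being filled with incoming samples, the other, already full buffer is processed, so input capture and anti-noise generation never block each other. The block size and the FFT-based processing step are assumptions for illustration.

```python
import numpy as np

BUF_LEN = 256                                      # assumed block size
buffers = [np.zeros(BUF_LEN), np.zeros(BUF_LEN)]   # even buffer and odd buffer
write_idx = 0                                      # which buffer is currently being written

def on_block_received(samples):
    """Store the new block in the write buffer, then process the other (full) buffer."""
    global write_idx
    buffers[write_idx][:] = samples                # input capture never waits on processing
    process_idx, write_idx = write_idx, 1 - write_idx   # swap roles for the next block
    spectrum = np.fft.rfft(buffers[process_idx])   # stand-in for the FFT-based path
    return -np.fft.irfft(spectrum, BUF_LEN)        # phase-inverted signal as anti-noise

anti_noise = on_block_received(np.random.randn(BUF_LEN))
```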
Citations: 1
AI Feedback Architecture of Video Surveillance System
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049874
Taewan Kim
The learning capacity of general deep learning models for object detection is often not large enough to represent real-world scene dynamics, so such models can be weak against 'unseen' data caused by environmental changes. Therefore, in this study, we propose a new method to continuously improve object detection algorithms by applying negative and positive learning mechanisms, especially for the intrusion detector in video surveillance systems. An iterative process is applied in which the current model is updated with newly incoming data and a state-of-the-art model in a continual process of adaptation. Experimental results on various challenging videos from real video surveillance systems demonstrate that the proposed method significantly improves algorithm accuracy with low complexity, making it suitable for real-world systems.
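A schematic sketch of the kind of feedback loop described above: detections from the deployed detector are reviewed, confirmed detections become positive samples, false alarms become negative samples, and the model is periodically fine-tuned on them. Every function name here is a placeholder, not the authors' API.

```python
def feedback_update_loop(detector, video_stream, review, fine_tune, batch_size=1000):
    """Collect reviewed detections and periodically fine-tune the deployed detector."""
    positives, negatives = [], []
    for frame in video_stream:
        for det in detector(frame):
            if review(frame, det):                     # confirmed intrusion -> positive learning
                positives.append((frame, det))
            else:                                      # false alarm -> negative learning
                negatives.append((frame, det))
        if len(positives) + len(negatives) >= batch_size:
            detector = fine_tune(detector, positives, negatives)   # continual adaptation step
            positives, negatives = [], []
    return detector
```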
Citations: 0
A segment-wise extraction of multivariate time-series features for Grassmann clustering
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049970
Sebin Heo, Bezawit Habtamu Nuriye, Beomseok Oh
In this paper, a novel approach for extracting features from multivariate time series (MTS) with different time lengths is proposed to enhance clustering accuracy. In particular, feature extraction is conducted on time-sample segments of the MTS, where several segments are defined without overlapping. As the feature extractor, conventional two-dimensional principal component analysis (2DPCA) is used because of its proven effectiveness in feature representation. Our experimental results show that the proposed segment-wise extraction of 2DPCA features helps enhance clustering accuracy.
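A minimal NumPy sketch of segment-wise 2DPCA feature extraction: the (time x variables) matrix is split into non-overlapping time segments, and each segment is projected onto the leading eigenvectors of the image covariance matrix computed over a set of training segments. The segment length, feature dimension, and random data are assumptions for illustration.

```python
import numpy as np

def twodpca_projection(train_segments, d=3):
    """train_segments: (n, t, v) array of segments. Returns the top-d projection matrix (v, d)."""
    mean = train_segments.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in train_segments) / len(train_segments)  # (v, v) image covariance
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, np.argsort(eigvals)[::-1][:d]]              # leading eigenvectors

def segment_features(mts, seg_len, proj):
    """mts: (time, variables). Split into non-overlapping segments and project each one."""
    n_seg = mts.shape[0] // seg_len                               # drop the incomplete tail segment
    segments = mts[:n_seg * seg_len].reshape(n_seg, seg_len, -1)
    return [seg @ proj for seg in segments]                       # one (seg_len, d) feature matrix per segment

train = np.random.randn(20, 50, 6)                                # 20 training segments, 50 samples x 6 variables
proj = twodpca_projection(train, d=3)
features = segment_features(np.random.randn(230, 6), seg_len=50, proj=proj)   # 4 feature matrices
```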
Citations: 0
Wireless event-based kill-switch for safe and autonomous UAS operation
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049917
Jahir Uddin, Muntasir Ahad, Abdulla Hil Kafi
UASs are used for a variety of sophisticated missions, and it is critical to protect people and property from any unwanted and uncontrolled UAS behavior. The goal of this research is to offer an intelligent, event-based kill-switch with a purpose-built secure authorization algorithm. It supports a fully manual mode of operation and is capable of autonomous decision-making. This multipurpose kill-switch can terminate sixteen flying drones at a time and wake them up all at once or individually.
Citations: 1
Classification of the Type of Brain Tumor in MRI Using Xception Model
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049979
Ramil Cobilla, Jhon Carlo Dichoso, Al. Minon, April Kate Pascual, Mideth B. Abisado, Shekinah Lor B. Huyo-a, G. Sampedro
A brain tumor is recognized as one of the most invasive conditions to operate on; cancer develops inside the brain due to unregulated and aberrant cell division. Recent breakthroughs in deep learning have greatly aided the medical imaging sector in diagnosing numerous diseases, and in MR images visual learning and image recognition have been used to classify the type of brain tumor. The researchers utilized a Convolutional Neural Network (CNN) approach, data augmentation, and image processing to classify brain MRI scans as cancerous or non-cancerous. Using transfer learning, the researchers compared the performance of the baseline CNN model with those of pre-trained CNN and Xception models. Although the experiment was conducted on a limited dataset, the results show that the model is effective with low complexity, attaining 96% accuracy with the Xception model.
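A minimal tf.keras sketch of the transfer-learning setup described above: an ImageNet-pretrained Xception backbone is frozen and a small classification head is trained on top. The input size, number of classes, dropout rate, and training pipeline are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

# ImageNet-pretrained Xception backbone, frozen, with a small classification head on top.
base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(4, activation="softmax"),   # assumed tumor classes, e.g. glioma / meningioma / pituitary / none
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds / val_ds: tf.data pipelines of MRI slices
```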
Citations: 0
2.5D Large-Scale Interposer Bonding Process Verification using Daisy-Chain for PIM Heterogeneous Integration Platform
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049851
Sujin Park, Yi-Gyeong Kim, Young-Deuk Jeon, Min-Hyung Cho, Jinho Han, Youngsu Kwon
This paper describes a 2.5D large-scale interposer bonding process verification method using a daisy chain (DC) for a PIM heterogeneous integration platform. The target platform is composed of 8 high-bandwidth memories (HBMs), 2 NPUs, an RDL interposer, and a substrate. To check the feasibility of the bonding process, an NPU DC die with a 48μm×55μm grid of μ-bumps is designed with a ring-shaped DC at the edges and corners. In addition, the DC pattern for the HBM PHY and the die-to-die connection is implemented line by line to check the high-density wire connection area, which has an offset to minimize diagonal connections. For the interposer with a 192μm×220μm grid of C4 bumps and the substrate with 1mm×1mm BGA balls, not only IN/OUT pins but also DCs are configured. The large-scale netlists are implemented by unifying the viewpoint to the bottom view. The bonding process verification can improve the cost efficiency of the 2.5D platform and the performance of the live interposer by optimizing the placement.
Citations: 0
CoordViT: A Novel Method of Improve Vision Transformer-Based Speech Emotion Recognition using Coordinate Information Concatenate
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049941
Jeongho Kim, Seung-Ho Lee
Recently, in speech emotion recognition, a Transformer-based method using spectrogram images instead of raw sound data has shown higher accuracy than Convolutional Neural Networks (CNNs). Vision Transformer (ViT), a Transformer-based method, achieves high classification accuracy by using patches divided from the input image, but pixel position information is not retained because of embedding layers such as linear projection. Therefore, in this paper, we propose a novel method to improve ViT-based speech emotion recognition using coordinate information concatenation. Since the proposed method retains pixel position information by concatenating coordinate information to the input image, accuracy on CREMA-D is greatly improved, by 82.96% compared to the state of the art on CREMA-D. As a result, the coordinate information concatenation proposed in this paper proves effective not only for CNNs but also for Transformers.
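A minimal PyTorch sketch of the coordinate-concatenation idea: normalized x/y coordinate channels are appended to the spectrogram image before patch embedding, so each patch token carries its pixel positions explicitly. The patch size, embedding width, and input resolution are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def add_coord_channels(images):                        # images: (batch, c, h, w) spectrogram "images"
    b, _, h, w = images.shape
    ys = torch.linspace(0, 1, h).view(1, 1, h, 1).expand(b, 1, h, w)   # normalized row index
    xs = torch.linspace(0, 1, w).view(1, 1, 1, w).expand(b, 1, h, w)   # normalized column index
    return torch.cat([images, ys, xs], dim=1)          # (batch, c + 2, h, w)

# Patch embedding now sees 1 spectrogram channel + 2 coordinate channels.
patch_embed = nn.Conv2d(in_channels=3, out_channels=192, kernel_size=16, stride=16)

spec = torch.randn(8, 1, 224, 224)
tokens = patch_embed(add_coord_channels(spec))         # (8, 192, 14, 14) patch grid
tokens = tokens.flatten(2).transpose(1, 2)             # (8, 196, 192) token sequence for the ViT encoder
```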
Citations: 1
High Performance 3.3KV 4H-SiC MOSFET with a Floating Island and Hetero Junction Diode
Pub Date : 2023-02-05 DOI: 10.1109/ICEIC57457.2023.10049864
Jaeyeop Na, Kwan-Su Kim
In this paper, a 3.3 kV 4H-SiC MOSFET structure with a floating island and a built-in heterojunction diode (FIHJD-MOSFET) is proposed and analyzed with a TCAD simulator. The floating island in the FIHJD-MOSFET not only improves the static performance of the device through charge balancing but also protects the P+ polysilicon region from a high electric field. As a result, the FIHJD-MOSFET operates stably even at high voltage, and the reverse recovery charge and the switching loss are also improved through the built-in heterojunction diode. Compared to a conventional diffusion MOSFET (C-DMOSFET), the B-FOM of the FIHJD-MOSFET improved by 67.4%, and the reverse recovery charge and total switching loss improved by 72.7% and 66.4%, respectively.
Citations: 0