
ITE Transactions on Media Technology and Applications: Latest Articles

[Foreword] Welcome to the Special Section on Advanced Multimedia Transmission Technology and Its Application
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/MTA.6.81
H. Murata
{"title":"[Foreword] Welcome to the Special Section on Advanced Multimedia Transmission Technology and Its Application","authors":"H. Murata","doi":"10.3169/MTA.6.81","DOIUrl":"https://doi.org/10.3169/MTA.6.81","url":null,"abstract":"","PeriodicalId":41874,"journal":{"name":"ITE Transactions on Media Technology and Applications","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69649588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
[Paper] Automatic Quality Evaluation of Whole Slide Images for the Practical Use of Whole Slide Imaging Scanner
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/mta.8.252
H. Shakhawat, Tomoya Nakamura, Fumikazu Kimura, Y. Yagi, M. Yamaguchi
A whole slide imaging (WSI) scanner scans pathological specimens to produce digital images for monitor-based diagnosis and analysis. However, the image quality is sometimes insufficient because of focus error or noise, in which case the slide needs to be rescanned. In previous work, a referenceless quality evaluation technique was proposed, but some artifacts (e.g., tissue folds and air bubbles) were detected as false positives. Such artifacts should be ignored when deciding whether rescanning is necessary, because they arise not during scanning but during slide preparation. This paper proposes a method for a more practical system that assesses WSI quality by distinguishing the origins of quality degradation: focus error or noise caused by the scanner versus artifacts introduced during slide preparation. In the method, a support vector machine first detects artifacts, and quality is then evaluated with the artifact regions excluded. The effectiveness of the proposed system has been demonstrated experimentally.
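To make the two-stage idea concrete, here is a minimal sketch of such a pipeline, assuming scikit-learn and toy tile features. It illustrates the described approach rather than reproducing the authors' implementation; the feature extractors, the focus threshold, and the 10% rescan criterion are placeholders.

```python
# Illustrative two-stage pipeline (not the authors' implementation): an SVM
# flags artifact tiles (tissue folds, air bubbles), and sharpness is then
# scored only on the remaining tiles.
import numpy as np
from sklearn.svm import SVC

def tile_features(tile: np.ndarray) -> np.ndarray:
    """Toy features per tile: mean intensity and local contrast, standing in
    for the texture/colour features a real system would use."""
    return np.array([tile.mean(), tile.std()])

def sharpness(tile: np.ndarray) -> float:
    """Simple focus proxy: variance of a Laplacian-like second difference."""
    lap = (np.roll(tile, 1, 0) + np.roll(tile, -1, 0)
           + np.roll(tile, 1, 1) + np.roll(tile, -1, 1) - 4 * tile)
    return float(lap.var())

def evaluate_wsi(tiles, artifact_clf: SVC, focus_threshold: float) -> bool:
    """Return True if the slide should be rescanned: quality is judged only
    on tiles the classifier does NOT mark as slide-preparation artifacts."""
    kept = [t for t in tiles if artifact_clf.predict([tile_features(t)])[0] == 0]
    if not kept:
        return False  # nothing but artifacts; rescanning would not help
    blurry = sum(sharpness(t) < focus_threshold for t in kept)
    return blurry / len(kept) > 0.1  # rescan if >10% of clean tiles are blurred

# Training the artifact detector would use labelled tiles, e.g.:
# clf = SVC(kernel="rbf").fit(X_train, y_train)   # y: 1 = artifact, 0 = tissue
```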
Citations: 12
[Paper] Development of Lightweight Compressed 8K UHDTV over IP Transmission Device Realizing Live Remote Production
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/mta.8.31
J. Kawamoto, T. Koyama, Masahiro Kawaragi, Kyoichi Saito, T. Kurakake
Studies on live program production systems that use Internet Protocol (IP) communications technology at broadcast stations are progressing. Remote production is attracting attention as a new style of live program production using IP: broadcast stations and venues are connected by an IP network, and programs are produced remotely from the broadcast station side. To enable remote production, the venue and the broadcast station must share, in real time, high-quality video shot at the venue. Signals other than video and audio that are necessary for program production, such as control and communication-line signals, must also be communicated bidirectionally. To realize 8K remote production, we have developed a lightweight compressed 8K-over-IP transmission device. In this work, we describe its functions and report experimental results on multi-channel audio remote production with 8K video and real-time 8K camera control over a 1000-km IP network.
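For context, a rough link-budget calculation shows why lightweight compression is essential for carrying 8K over typical IP circuits. All figures below (7680x4320, 60 fps, 10-bit 4:2:2, an 8 Gb/s usable payload on a 10 GbE circuit, and the 1.5 Gb/s rate of one HD-SDI link) are illustrative assumptions rather than parameters reported in the paper.

```python
# Back-of-the-envelope link budget (illustrative figures, not from the paper).
def video_bitrate_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Raw video bit rate in Gb/s (ignoring blanking, audio, and FEC overhead)."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# Assumed 8K format: 7680x4320, 60 fps, 10-bit, 4:2:2 (2 samples per pixel on average).
raw_8k = video_bitrate_gbps(7680, 4320, 60, 10, 2)   # ~39.8 Gb/s uncompressed
hd_sdi = 1.5                                         # Gb/s, one HD-SDI (2K) link
target = 8.0                                         # Gb/s payload, leaving headroom on a 10 GbE circuit
print(f"uncompressed 8K: {raw_8k:.1f} Gb/s (about {raw_8k / hd_sdi:.0f}x one HD-SDI link)")
print(f"roughly {raw_8k / target:.0f}:1 lightweight compression is needed to fit {target:.0f} Gb/s")
```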
Citations: 4
[Paper] Speech-driven Face Reenactment for a Video Sequence
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/mta.8.60
Yuta Nakashima, Takaaki Yasui, L. Nguyen, N. Babaguchi
We present a system for reenacting a person's face driven by speech. Given a video sequence with the corresponding audio track of a person giving a speech, and another audio track containing different speech from the same person, we reconstruct a 3D mesh of the face in each frame of the video sequence to match the speech in the second audio track. Audio features are extracted from the two audio tracks. Assuming that the appearance of the mouth is highly correlated with these speech features, we extract the mouth region of the face's 3D mesh from the video sequence when the speech features of the second audio track are close to those of the video's audio track. While retaining temporal consistency, these extracted mouth regions then replace the original mouth regions in the video sequence, synthesizing a reenactment video in which the person seemingly delivers the speech in the second audio track. Our system, coined S2TH (speech to talking head), does not require any special hardware to capture the 3D geometry of faces but uses a state-of-the-art method for facial geometry regression. We demonstrate the reenactment quality visually and subjectively.
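A minimal sketch of the frame-matching step is shown below: for each frame of the new speech it selects the video frame whose audio features are closest, with a small jump penalty standing in for the temporal-consistency constraint. The feature representation, distance metric, and smoothing term are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch of the frame-selection idea (simplified): pick, per target-speech
# frame, the video frame with the nearest audio features, then reuse that
# frame's mouth-region mesh.
import numpy as np

def match_frames(video_audio_feats: np.ndarray, new_audio_feats: np.ndarray,
                 smooth: float = 1.0) -> np.ndarray:
    """video_audio_feats: (N, D) features per video frame;
    new_audio_feats: (M, D) features per target-speech frame.
    Returns M indices into the video, with a small penalty on large jumps to
    keep the selected mouth shapes temporally consistent."""
    dists = np.linalg.norm(new_audio_feats[:, None, :] - video_audio_feats[None, :, :], axis=-1)
    idx = np.empty(len(new_audio_feats), dtype=int)
    idx[0] = int(np.argmin(dists[0]))
    frames = np.arange(len(video_audio_feats))
    for t in range(1, len(new_audio_feats)):
        cost = dists[t] + smooth * np.abs(frames - idx[t - 1]) / len(frames)
        idx[t] = int(np.argmin(cost))
    return idx

# mouth_meshes[idx] would then replace the mouth regions frame by frame.
```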
Citations: 3
[Paper] Preserved Color Pixel: high-resolution and high-color-fidelity image acquisition using single image sensor with sub-half-micron pixels
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/mta.8.161
Y. Yamashita, R. Kuroda, S. Sugawa
A preserved-color-pixel (PCP) concept is proposed. The PCP color filter array (CFA) is arranged to construct "PCP pixels". A PCP pixel is surrounded by "buffer pixels" whose color filters have the same color spectrum as that of the PCP pixel, so that most of the color cross-talk from pixels of different colors is absorbed by the buffer pixels. The color cross-talk components of the buffer-pixel signals are computationally canceled by a proposed non-parametric method called "similarity-based blind cross-talk correction (SBC)", in which the signals of PCP pixels are used as the ground truth to estimate the buffer-pixel signals without the influence of cross-talk. The demosaicing of each color plane's image sampled with the PCP-CFA arrangement is implemented by adaptive normalized convolution (ANC) in conjunction with the proposed "post-convolutional variation minimization (PCVM)" algorithm for its cost function. Both SBC and PCVM-ANC are especially useful for image acquisition with a pixel array of the sub-half-micron generation, whose pixel pitch is approximately 0.5 μm or smaller. The concept is verified with image simulation, and its effectiveness is quantified with the slanted-edge spatial frequency response (SFR) modulation transfer function (MTF) method, using a parametric color cross-talk analysis based on the proposed "scalable-single-parameter (SSP)" color cross-talk model. The image simulation confirms the color reproducibility and the effectiveness of the resolution improvement under complex inter-pixel color cross-talk and lens lateral chromatic aberration (LCA). A peak signal-to-noise ratio (PSNR) analysis of images simulated from real photographs also verifies this advantage, showing that the proposed concept maintains PSNR as color cross-talk increases.
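The benefit of ringing a PCP pixel with same-color buffer pixels can be illustrated with a toy cross-talk model. The single leak parameter below is a simplification of my own, not the paper's SSP model, and the 3x3 patch and numbers are arbitrary; the sketch only shows why the PCP pixel's signal stays nearly cross-talk-free and can therefore serve as a reference for correcting the buffer pixels.

```python
# Toy illustration of why a PCP pixel stays color-pure under cross-talk.
# alpha mixes each pixel with the mean of its 4 neighbours (my simplification).
import numpy as np

def apply_crosstalk(plane_true: np.ndarray, alpha: float) -> np.ndarray:
    """Each measured pixel keeps (1 - alpha) of its own signal and receives
    alpha spread equally from its 4 nearest neighbours."""
    up, down = np.roll(plane_true, -1, 0), np.roll(plane_true, 1, 0)
    left, right = np.roll(plane_true, -1, 1), np.roll(plane_true, 1, 1)
    return (1 - alpha) * plane_true + alpha * (up + down + left + right) / 4

# 3x3 patch of a flat scene: green channel = 0.8, red channel = 0.2 everywhere.
# Case 1: Bayer-like neighbourhood -> the centre green pixel's 4 neighbours are red.
# Case 2: PCP arrangement -> the centre green pixel is ringed by green buffer pixels.
green, red, alpha = 0.8, 0.2, 0.1
bayer_patch = np.full((3, 3), red); bayer_patch[1, 1] = green
pcp_patch = np.full((3, 3), green)            # buffers share the centre's colour
print("Bayer centre after cross-talk:", apply_crosstalk(bayer_patch, alpha)[1, 1])
print("PCP centre after cross-talk:  ", apply_crosstalk(pcp_patch, alpha)[1, 1])
# -> the Bayer centre drifts toward red (0.74) while the PCP centre stays at 0.80,
#    which is what lets PCP signals act as ground truth for correcting buffers.
```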
Citations: 0
[Paper] Automotive OLED Display with High Mobility Top Gate IGZO TFT Backplane
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/mta.8.224
Yujiro Takeda, M. Aman, Shogo Murashige, Kazuatsu Ito, Ishida Izumi, Hiroshi Matsukizono, Naoki Makita
High-performance IGZO TFTs with a top-gate structure were developed for an automotive OLED display backplane. The fabrication processes are optimized by balancing oxygen and hydrogen contents using the µ-PCD method. The mobility of the IGZO TFTs reaches as high as 32 cm²/Vs with enhanced threshold voltages. We checked the reliability of the TFTs under positive bias temperature (PBT), negative bias temperature (NBT), and negative bias temperature illumination (NBTI) stress tests. Because the IGZO TFTs show only slight threshold-voltage (Vth) shifts, within ±0.5 V, under PBT and NBT and even after NBTI stress, there is no critical deterioration. We expect these high-mobility IGZO TFTs to be stable enough for use in OLED and other self-luminous displays. We have also demonstrated a prototype 12.3" OLED module for automotive applications. The prototype flexible display showed excellent brightness uniformity even after bending.
Citations: 5
[Paper] Analysis of Using Holes as Carriers in the Film in an 8K Stacked CMOS Image Sensor Overlaid with a Crystalline-Selenium Multiplication Layer
IF 1.1 Pub Date : 2020-01-01 DOI: 10.3169/mta.8.280
T. Arai, S. Imura, T. Watabe, Y. Honda, K. Mineo, K. Miyakawa, M. Nanba, M. Kubota
A prototype 8K stacked CMOS image sensor overlaid with a crystalline-selenium-based avalanche-multiplication layer, in which holes are used as the traveling carriers in the film, was fabricated. Analysis of the energy-band diagrams from the film to the n-type floating-diffusion region revealed that (i) large spot noise in the captured image could be suppressed and (ii) the high voltage required for avalanche multiplication could be applied to the film by using holes as carriers, even when defects existed in the film. According to the experimental results, no large spot noise occurred when the voltage applied to the film was +5 V. Additionally, the photoelectric-conversion current increased by a factor of 1.4 relative to the saturation-signal level when the applied voltage was +21.6 V. These results confirm charge multiplication in a crystalline-selenium-based stacked CMOS image sensor.
Citations: 2
[Invited papers] Comparing Approaches to Interactive Lifelog Search at the Lifelog Search Challenge (LSC2018)
IF 1.1 Pub Date : 2019-04-01 DOI: 10.3169/MTA.7.46
C. Gurrin, K. Schoeffmann, Hideo Joho, Andreas Leibetseder, Liting Zhou, Aaron Duane, Duc-Tien Dang-Nguyen, M. Riegler, Luca Piras, M. Tran, Jakub Lokoč, Wolfgang Hürst
The Lifelog Search Challenge (LSC) is an international content retrieval competition that evaluates search for personal lifelog data. At the LSC, content-based search is performed over a multi-modal dataset, continuously recorded by a lifelogger over 27 days, consisting of multimedia content, biometric data, human activity data, and information activities data. In this work, we report on the first LSC that took place in Yokohama, Japan in 2018 as a special workshop at ACM International Conference on Multimedia Retrieval 2018 (ICMR 2018). We describe the general idea of this challenge, summarise the participating search systems as well as the evaluation procedure, and analyse the search performance of the teams in various aspects. We try to identify reasons why some systems performed better than others and provide an outlook as well as open issues for upcoming iterations of the challenge.
Citations: 80
[Paper] Dynamic PVLC: Pixel-level Visible Light Communication Projector with Interactive Update of Images and Data
IF 1.1 Pub Date : 2019-01-01 DOI: 10.3169/mta.7.160
T. Hiraki, S. Fukushima, Hiroshi Watase, T. Naemura
We previously studied methods leveraging pixel-level visible light communication (PVLC), which embeds information imperceptible to the human eye in each pixel of an image. In this paper, we propose a dynamic PVLC system that offers high video quality and interactively updates the PVLC information through hardware encoding processing.
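As background on how pixel-level embedding can be made imperceptible, the sketch below illustrates the general principle of temporal modulation, splitting each frame into two fast complementary subframes whose average is the visible image. This is a simplified reading of the PVLC idea, not the hardware encoder proposed in the paper; the modulation amplitude and the one-bit-per-pixel payload are arbitrary assumptions.

```python
# Simplified principle of imperceptible pixel-level embedding (illustrative only):
# each video frame is shown as two fast subframes whose average equals the original
# frame, while their per-pixel difference carries one data bit per pixel.
import numpy as np

def encode_subframes(frame: np.ndarray, bits: np.ndarray, delta: float = 4.0):
    """frame: (H, W) 8-bit luminance; bits: (H, W) in {0, 1}.
    Returns two subframes; a fast photodetector recovers each bit from the sign
    of (sub_a - sub_b), while a viewer perceives only their temporal average."""
    sign = np.where(bits == 1, 1.0, -1.0)
    sub_a = np.clip(frame + sign * delta, 0, 255)
    sub_b = np.clip(frame - sign * delta, 0, 255)
    return sub_a, sub_b

def decode_bits(sub_a: np.ndarray, sub_b: np.ndarray) -> np.ndarray:
    return (sub_a - sub_b > 0).astype(np.uint8)

frame = np.full((4, 4), 128.0)
bits = np.random.randint(0, 2, size=(4, 4))
a, b = encode_subframes(frame, bits)
assert np.array_equal(decode_bits(a, b), bits)   # data survives; (a + b) / 2 == frame
```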
Citations: 5
[Paper] Systems for Supporting Deaf People in Viewing Sports Programs by Using Sign Language Animation Synthesis
IF 1.1 Pub Date : 2019-01-01 DOI: 10.3169/MTA.7.126
Tsubasa Uchida, H. Sumiyoshi, Taro Miyazaki, Makiko Azuma, Shuichi Umeda, Naoto Kato, N. Hiruma, H. Kaneko, Y. Yamanouchi
In this paper, we propose display systems that support deaf and hard-of-hearing people in viewing sports programs by using Japanese Sign Language (JSL) animation synthesis. The synthesis can automatically produce JSL CG animation from live sports data during a game. We improved the synthesis to produce sports-specific collocated motions by compounding several word motions. Using the improved synthesis, we developed three prototype systems that display JSL CG animation and live sports video simultaneously: a web browser-based system, a tablet application-based system, and a tablet & TV system. We carried out a series of experiments to evaluate these systems using real-time data from actual games, and the tablet & TV system was the most preferred.
Citations: 8