Latest publications from the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)

Scalable multiple GPU architecture for super multi-view synthesis using MVD
Byoungkyun Kim, Byeongho Choi, Youngbae Hwang
This paper presents a scalable multiple-GPU architecture for super multi-view (SMV) synthesis using multi-view video plus depth (MVD) data. SMV synthesis is essential for generating 3D content for SMV 3D displays with over a hundred views. A recently released SMV 3D display supports 108 viewpoints and shows a multiplexed result with a small viewing interval. Hence, more than a hundred intermediate views must be synthesized for each pair of cameras in the multi-camera system. View synthesis of more than a hundred high-resolution images, however, requires massive data processing, which increases linearly with the number of synthesized views. In this paper, we propose a real-time SMV synthesis method using multiple GPUs. The scalability of GPUs can be utilized to reduce the processing time of view synthesis without any change to the kernel function. We evaluate the proposed method by synthesizing 180 intermediate views from 18 input HD images while varying the number of GPUs, and show that 180 intermediate views can be synthesized in real time using 4 GPUs.
DOI: 10.1109/APSIPA.2016.7820787
Citations: 0
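The linear-scaling claim above can be illustrated with a sketch of static view partitioning across GPUs. This is a hypothetical illustration, not the authors' implementation; the function and variable names are invented for the example.

```python
# Hypothetical sketch: statically partition view-synthesis jobs across GPUs
# so that per-GPU work shrinks roughly linearly with the GPU count.

def partition_views(num_views, num_gpus):
    """Split view indices into near-equal contiguous chunks, one per GPU."""
    base, extra = divmod(num_views, num_gpus)
    chunks, start = [], 0
    for g in range(num_gpus):
        size = base + (1 if g < extra else 0)
        chunks.append(list(range(start, start + size)))
        start += size
    return chunks

# 180 intermediate views over 4 GPUs -> 45 views per GPU.
chunks = partition_views(180, 4)
assert [len(c) for c in chunks] == [45, 45, 45, 45]
```

Each GPU would then run the same, unchanged synthesis kernel over its own chunk, which matches the paper's point that scaling requires no kernel changes.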
Light field depth from multi-scale particle filtering
Jie Chen, Lap-Pui Chau, He Li
Rich information can be extracted from high-dimensional light field (LF) data, and one of the most fundamental outputs is scene depth. State-of-the-art depth calculation methods produce noisy estimates, especially over texture-less regions. Based on superpixel segmentation, we propose to incorporate multi-level disparity information into a Bayesian particle filtering framework. Each pixel's individual as well as regional information is used to give maximum a posteriori (MAP) predictions based on our proposed statistical model. The method produces scene depth interpolation results equivalent or superior to some state-of-the-art methods, with potential in image processing applications such as scene alignment and stabilization.
DOI: 10.1109/APSIPA.2016.7820906
Citations: 1
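As a hedged illustration of the MAP selection step described above (not the authors' statistical model), the following toy sketch weights candidate disparity particles by an arbitrary likelihood and returns the maximum a posteriori choice:

```python
# Toy MAP selection over disparity-hypothesis particles.
# The likelihood function here is invented for illustration only.

def map_disparity(particles, likelihood):
    """particles: candidate disparities; likelihood: d -> unnormalized score."""
    weights = [likelihood(d) for d in particles]
    total = sum(weights)
    posterior = [w / total for w in weights]          # normalize to a posterior
    best = max(range(len(particles)), key=lambda i: posterior[i])
    return particles[best]

def toy_likelihood(d):
    # Peaked at disparity 3.0; a real system would use photo-consistency.
    return 1.0 / (1.0 + (d - 3.0) ** 2)

assert map_disparity([1.0, 2.0, 3.0, 4.0], toy_likelihood) == 3.0
```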
Mandarin citation tone patterns of prelingual Chinese deaf adults
Yanting Chen, Yu Chen, Jin Zhang, Ju Zhang, Hua Lin, Jianguo Wei, J. Dang
The present study examined the citation patterns of Mandarin tones in prelingual deaf adults with cochlear implants or hearing aids. The results showed that the participants tried to build up tonal patterns by exploiting phonetic features such as creaky voice and tonal duration. The results also indicated that although the participants had problems distinguishing T2 from T3, T2 was harder than T3 for them. In fact, T2 was the hardest of all Mandarin tones for these prelingual deaf adults.
DOI: 10.1109/APSIPA.2016.7820806
Citations: 3
Fusion of color and depth information for image segmentation
Jan Kristanto Wibisono, H. Hang
The goal of this research is to fuse color and depth information to generate good image segmentation. Image segmentation has been studied for several decades, but only recently has the use of depth data become popular, owing to the wide availability of affordable depth cameras such as Microsoft Kinect. The availability of depth information opens up new opportunities for image segmentation. Many color image segmentation methods have been developed over the years; only recently have several papers been published on image segmentation using both depth and color information. In this research, we focus on how to combine depth and color information to improve state-of-the-art color image segmentation methods. We adopt a few existing schemes and fuse their outputs to produce the final results, exploiting planar information to improve the color segmentation. The result is quite satisfactory on both human perception and objective measures.
DOI: 10.1109/APSIPA.2016.7820913
Citations: 4
Blur kernel re-initialization for blind image deblurring
Hyukzae Lee, Changick Kim
We propose a simple yet effective blur kernel re-initialization method in a coarse-to-fine framework for blind image deblurring. The proposed method is motivated by the observation that most deblurring algorithms use only the blur kernel estimated at the coarser level to initialize the blur kernel for the next finer level. Based on this observation, we design an objective function that exploits both the blur kernel and the latent image estimated at the coarser level to produce an initial blur kernel for the finer level. Experimental results demonstrate that the proposed algorithm improves the accuracy and success rate of existing deblurring algorithms.
DOI: 10.1109/APSIPA.2016.7820853
Citations: 0
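For context, the conventional coarse-to-fine initialization the paper improves on can be sketched as upsampling the coarser-level kernel and renormalizing it so it remains a valid blur kernel. This is a generic illustration with invented names, not the authors' objective function:

```python
# Conventional coarse-to-fine kernel initialization (generic sketch):
# nearest-neighbour upsample the coarse kernel, then renormalize to sum 1.

def upsample_kernel(kernel, scale=2):
    """kernel: list of rows of floats; returns the upsampled, normalized kernel."""
    up = []
    for row in kernel:
        wide = [v for v in row for _ in range(scale)]   # widen columns
        up.extend([wide[:] for _ in range(scale)])      # duplicate rows
    s = sum(sum(r) for r in up)
    return [[v / s for v in r] for r in up]

k = upsample_kernel([[0.5, 0.5]])
assert len(k) == 2 and len(k[0]) == 4
assert abs(sum(sum(r) for r in k) - 1.0) < 1e-9        # still sums to 1
```

The paper's contribution is to additionally use the coarse-level latent image when forming this initial kernel, rather than the upsampled kernel alone.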
An energy efficient routing protocol with stable cluster head for reactive wireless sensor networks
T. Samanchuen
Wireless sensor networks (WSNs) are designed for monitoring environments that are difficult to access. Each node has a limited energy supply that cannot be replaced or recharged, so all components of a WSN, software as well as hardware, must be energy efficient. An energy-efficient routing protocol can prolong the network lifetime. This work addresses reactive WSNs. A protocol using a static clustering technique with cluster-head selection based on maximum residual energy is proposed. Simulations demonstrate the performance of the proposed protocol and show that it prolongs the network lifetime longer than conventional protocols.
DOI: 10.1109/APSIPA.2016.7820793
Citations: 1
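The cluster-head rule described above — pick the node with maximum residual energy in each static cluster — can be sketched as follows. The data layout and names are assumptions for illustration, not the paper's protocol messages:

```python
# Cluster-head selection by maximum residual energy (illustrative sketch).

def select_cluster_heads(clusters):
    """clusters: {cluster_id: {node_id: residual_energy}} -> head per cluster."""
    return {cid: max(nodes, key=nodes.get) for cid, nodes in clusters.items()}

heads = select_cluster_heads({
    "A": {"n1": 0.42, "n2": 0.91, "n3": 0.33},   # n2 has most energy left
    "B": {"n4": 0.57, "n5": 0.12},
})
assert heads == {"A": "n2", "B": "n4"}
```

Re-running the selection each round as residual energies drop rotates the head role toward better-charged nodes, which is the lifetime-extending idea behind the protocol.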
Personal binocular vision calibration using layered random dot stereogram
Min-Koo Kang, Sung-Kyu Kim
Visual discomfort (VD) is inevitable as long as stereoscopy is used in 3D displays, and there is a trade-off between depth impression and visual comfort. For this reason, technologies that control depth impression while accounting for VD perception have attracted great interest from researchers. However, VD perception varies significantly with personal as well as environmental factors, and evaluating it still requires time-consuming viewing tests. We propose a simple and reliable method that calibrates the stereo acuity, binocular fusion limits, and depth-perception preferences of individuals. In the experiment, four non-expert viewers participated under identical viewing conditions. The results confirmed that the calibrated features of human binocular vision coincide with the literature, apart from slight variations among the participants. The proposed method could be utilized across the whole 3D video technology chain, from video capture to display.
DOI: 10.1109/APSIPA.2016.7820838
Citations: 1
A facial expression model with generative albedo texture
Songnan Li, Fanzi Wu, Tianhao Zhao, Ran Shi, K. Ngan
A facial expression model (FEM) is developed which can synthesize various face shapes and albedo textures. The face shape varies with individuals and expressions. FEM synthesizes these shape variations using a bilinear face model built from the Face Warehouse Database. The generative albedo texture, on the other hand, is extracted directly from a neutral face model, the Basel Face Model. In this paper, we elaborate the model construction process and demonstrate its application in face reconstruction and expression tracking.
DOI: 10.1109/APSIPA.2016.7820866
Citations: 0
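A bilinear face model of the kind referenced above contracts a core tensor with an identity weight vector and an expression weight vector to produce a face shape. The toy sketch below (invented dimensions and names, not the authors' model or data) shows that contraction:

```python
# Bilinear face model contraction (toy sketch):
# shape = sum_{i,e} w_id[i] * w_exp[e] * core[i][e]

def bilinear_shape(core, w_id, w_exp):
    """core[i][e] is a vertex-coordinate vector for identity i, expression e."""
    n_vert = len(core[0][0])
    shape = [0.0] * n_vert
    for i, wi in enumerate(w_id):
        for e, we in enumerate(w_exp):
            for v in range(n_vert):
                shape[v] += wi * we * core[i][e][v]
    return shape

# Toy core: 2 identities x 2 expressions x 1 "vertex" coordinate.
core = [[[1.0], [2.0]], [[3.0], [4.0]]]
# Pure identity 0 with pure expression 1 picks out core[0][1].
assert bilinear_shape(core, [1.0, 0.0], [0.0, 1.0]) == [2.0]
```

Fitting such a model means solving for `w_id` and `w_exp` (and pose) that best explain an observed face; the real core tensor would hold thousands of vertex coordinates.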
Consideration on performance improvement of shadow and reflection removal based on GMM
K. Nishikawa, Yoshihiro Yamashita, Toru Yamaguchi, T. Nishitani
Wearable devices are expected to provide ubiquitous network connections in the near future. In this paper, we consider systems that use human finger gestures as an input device. To ensure accurate input, the shapes of the arm and fingers should be captured clearly, and for that purpose we consider Gaussian mixture model (GMM) foreground segmentation. It is known that shadow or reflection in the frame image degrades the performance of GMM foreground segmentation. Low-complexity shadow and reflection removal methods suitable for wearable devices have been proposed [1]-[3]. Although these methods improve foreground segmentation, the results depend on the characteristics of the video. In this paper, we improve the performance of these methods by modifying the equation that decides the shadow region. Through computer simulations, we show the effectiveness of the proposed method.
DOI: 10.1109/APSIPA.2016.7820902
Citations: 3
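A common shadow-decision rule of the kind the paper modifies tests whether a pixel darkens the background uniformly across color channels: shadows lower brightness while roughly preserving chromaticity. The thresholds and function below are illustrative assumptions, not the equation from the paper:

```python
# Illustrative shadow test: a pixel is classified as shadow on the background
# if its brightness ratio falls in [lo, hi] and the per-channel ratios agree.

def is_shadow(bg, px, lo=0.4, hi=0.95, chroma_tol=0.1):
    """bg, px: (R, G, B) background and current pixel values."""
    ratios = [p / b for p, b in zip(px, bg) if b > 0]
    if len(ratios) != 3:
        return False                       # degenerate background pixel
    mean = sum(ratios) / 3
    if not (lo <= mean <= hi):
        return False                       # too bright or too dark for shadow
    return all(abs(r - mean) <= chroma_tol for r in ratios)

assert is_shadow((200, 180, 160), (120, 108, 96))      # uniform darkening
assert not is_shadow((200, 180, 160), (40, 36, 32))    # too dark: foreground
assert not is_shadow((200, 180, 160), (120, 170, 60))  # chroma changed
```

Modifying this decision equation (the thresholds and the chromaticity test) is exactly the kind of tuning the paper investigates to make removal robust across videos.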
An improved LEA block encryption algorithm to prevent side-channel attack in the IoT system
Jaehak Choi, Youngseop Kim
IoT (Internet of Things) devices are limited in resources such as CPU and memory. The LEA (Lightweight Encryption Algorithm) was standardized in Korea in 2013 as an encryption algorithm suitable for IoT devices. However, LEA is vulnerable to side-channel analysis attacks based on power consumption. To mitigate this vulnerability, masking techniques are mainly used, but masking increases execution time, sacrificing the cipher's speed and lightweight characteristics. This paper proposes a new, faster LEA algorithm as a countermeasure to side-channel attacks. The proposed algorithm is about 17 times faster than existing algorithms that use masking to prevent differential side-channel attacks.
DOI: 10.1109/APSIPA.2016.7820845
Citations: 28
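The masking countermeasure discussed above can be illustrated with first-order boolean masking of a single XOR: the secret value is split into two random shares so that neither share alone correlates with it, and the result is recovered only at the end. This is a generic sketch, not LEA's actual masked implementation:

```python
import secrets

# First-order boolean masking of an XOR key addition (generic sketch).

def masked_key_xor(x, k):
    """Compute x ^ k while never handling the unmasked value x directly."""
    m = secrets.randbits(32)        # fresh random mask each invocation
    share0, share1 = x ^ m, m       # x = share0 ^ share1; each share looks random
    share0 ^= k                     # operate on one share only
    return share0 ^ share1          # unmask: (x ^ m ^ k) ^ m == x ^ k

x, k = 0x12345678, 0x9ABCDEF0
assert masked_key_xor(x, k) == x ^ k
```

Each masked operation costs extra XORs and fresh randomness, which is why masking slows a lightweight cipher down and motivates the faster countermeasure the paper proposes.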