
2015 IEEE International Conference on Consumer Electronics (ICCE): Latest Publications

A hybrid architecture based on TS and HTTP for real-time 3D video transmission
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066347
K. Yun, W. Cheong, Jin Young Lee, Kyuheon Kim, Gwangsoon Lee
This paper introduces a hybrid architecture for efficient 3D video transmission on a legacy DTV channel and IP network. The hybrid architecture specifically includes a robust synchronization method on heterogeneous networks, adaptive streaming of the 3D additional view by the ISO/IEC 23009-1 DASH and transport stream system target decoder (T-STD) model for stable playback of both views. Based on experimental results, we confirm that the proposed architecture can be used as a core technology in hybrid 3DTV broadcasting and a reference model for development of various hybrid services.
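The adaptive-streaming half of this architecture follows ISO/IEC 23009-1 DASH, where the client picks a representation of the 3D additional view to match the measured network throughput. A minimal rate-selection sketch; the bitrates, safety factor, and function name are illustrative, not taken from the paper:

```python
def select_representation(bitrates_bps, measured_throughput_bps, safety_factor=0.8):
    """Pick the highest DASH representation whose bitrate fits within a
    conservative fraction of the measured network throughput."""
    budget = measured_throughput_bps * safety_factor
    feasible = [b for b in sorted(bitrates_bps) if b <= budget]
    # Fall back to the lowest representation if even that exceeds the budget.
    return feasible[-1] if feasible else min(bitrates_bps)

# Hypothetical representations of the 3D additional view: 1, 2.5 and 5 Mbit/s.
reps = [1_000_000, 2_500_000, 5_000_000]
print(select_representation(reps, 4_000_000))  # 2500000: fits the 3.2 Mbit/s budget
```

The safety factor keeps the additional view's segments arriving early enough to be synchronized with the broadcast TS view.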
Citations: 1
Heterogeneous media communications for future wireless local area networks
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066560
T. Nishio, M. Morikura, Koji Yamamoto
Many media exist for communication, such as LTE, IEEE 802.11 wireless local area networks (WLANs), millimeter-wave communications, and visible light communications (VLCs), and a lot of research has been conducted to find methods to improve the performance of each medium. However, the use of a single medium for communication limits the performance upper bound that can be achieved by using more than one medium for communication. Moreover, some media are widely used, while others are not because their use cases are limited. Therefore, the more commonly used media still suffer from a lack of bandwidth, while bandwidth for other media types is abundant. In this paper, we propose a heterogeneous media communications (HeMCOM) framework, where multiple media are used for leveraging the abundant bandwidth and increasing the total communication performance. HeMCOM focuses on leveraging the difference of the PHY and MAC characteristics of each medium. This paper summarizes the HeMCOM concept, introduces related works from the point of view of this concept, and discusses the possibility of using several types of media.
Citations: 3
Resolving ADAS imaging subsystem functional safety quagmire
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066419
R. Gulati, V. Easwaran, P. Karandikar, Mihir Mody, Prithvi Shankar
Nowadays it has become common practice to use multi-core SoCs in safety-related Advanced Driver Assistance Systems (ADAS). The ISO 26262 functional safety standard provides requirements to avoid or reduce the risk caused by these systems. In safety-related systems, a comprehensive test strategy is required to guarantee successful normal operation of the SoC throughout its life cycle. Software-based self-tests have been proposed as an effective alternative to hardware-based self-tests in order to eliminate area and save new hardware IP development costs. This paper proposes a software-based self-test scheme to ensure the integrity of imaging subsystems and prevent violation of the defined safety goals for several camera-based ADAS applications. The proposal uses a hand-crafted, functional, time-triggered, non-concurrent online test: a known golden-reference image-processing run is introduced every fault-tolerant time interval, covering permanent and intermittent faults in the imaging sub-systems. For a sample 1080p30 input capture, assuming a fault-tolerant time interval of 300 ms for a typical ADAS application and running this hand-crafted test pattern after every 8 frames, the proposed solution enables the hardware self-test at an additional 12.5% clocking requirement for the imaging sub-system and an additional 12.5% DDR throughput requirement.
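The 12.5% figure follows directly from the test schedule quoted in the abstract: one golden-reference frame is processed for every 8 captured frames. A quick check of the arithmetic:

```python
FPS = 30                 # 1080p30 capture
FTTI_MS = 300            # fault-tolerant time interval of the ADAS application
TEST_EVERY_N_FRAMES = 8  # golden-reference test pattern after every 8 frames

# One extra (test) frame per 8 real frames of imaging-subsystem work.
overhead = 1 / TEST_EVERY_N_FRAMES
print(f"{overhead:.1%}")  # 12.5%

# The schedule must also fit inside the fault-tolerant interval:
# 8 frames at 30 fps take about 267 ms, which is under the 300 ms FTTI.
frames_time_ms = TEST_EVERY_N_FRAMES / FPS * 1000
print(round(frames_time_ms), frames_time_ms < FTTI_MS)  # 267 True
```

The same fraction applies to both clocking and DDR throughput because the test frame traverses the same imaging pipeline as a real frame.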
Citations: 4
Video summarization based on extracted key position of spotted objects
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066433
Zaur Fataliyev, D. Han, Y. Imamverdiyev, Hanseok Ko
This paper proposes a novel fusion method for summarizing surveillance videos based on the extracted key positions of spotted objects in the observed area. The accumulated energy of an object is calculated by analyzing its motion pattern for key-position extraction. The method summarizes long videos into a single index frame. Experimental results demonstrate its effectiveness.
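As a rough sketch of the idea, the motion energy of a tracked object can be computed as the inter-frame pixel difference inside its bounding box, and its key position taken at the frame where that energy peaks. The exact energy definition and tracking input here are assumptions, not the paper's method:

```python
import numpy as np

def key_position(frames, boxes):
    """Return the index of the frame where the tracked object's motion
    energy (inter-frame pixel difference inside its bounding box) peaks.

    frames: list of HxW grayscale arrays; boxes: per-frame (x, y, w, h)."""
    energies = [0.0]  # no motion is defined for the first frame
    for prev, cur, (x, y, w, h) in zip(frames, frames[1:], boxes[1:]):
        patch_prev = prev[y:y + h, x:x + w].astype(float)
        patch_cur = cur[y:y + h, x:x + w].astype(float)
        energies.append(float(np.abs(patch_cur - patch_prev).sum()))
    return int(np.argmax(energies))

frames = [np.zeros((4, 4)) for _ in range(3)]
frames[2][0:2, 0:2] = 10.0  # the object moves between frames 1 and 2
print(key_position(frames, [(0, 0, 2, 2)] * 3))  # 2
```

The selected patches of all spotted objects would then be composited into the single index frame.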
Citations: 1
Dynamic integration of appliances into ZigBee home networks through web services
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066396
R. Valente, Waldir Sabino da Silva, V. Lucena
This paper presents an effective system for the dynamic integration of appliances into a ZigBee home network through home gateways based on Web services. The proposed architecture is described throughout the paper. The resulting system has been implemented and used in experiments on a home-network test bed to prove its feasibility and effectiveness. The obtained results are promising. This new architecture is expected to contribute to the development of ubiquitous service systems for home-network domains using consumer electronic devices.
Citations: 0
Machine learning for arbitrary downsizing of pre-encoded video in HEVC
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066464
Luong Pham Van, J. D. Praeter, G. Wallendael, J. D. Cock, R. Walle
In this paper, we propose a machine learning based transcoding scheme for arbitrarily downsizing a pre-encoded High Efficiency Video Coding video. The spatial scaling factor can be freely selected to adapt the output bit rate to the bandwidth of the network. Furthermore, machine learning techniques can exploit the correlation between input and output coding information to predict the split-flag of coding units in a P-frame. We analyzed the performance of both offline and online training in the learning phase of transcoding. The experimental results show that the proposed techniques significantly reduce the transcoding complexity and achieve trade-offs between coding performance and complexity. In addition, we demonstrate that online training performs better than offline training.
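The split-flag prediction can be framed as supervised classification: coding features extracted from the input bitstream predict whether each output coding unit splits, replacing the exhaustive rate-distortion search. A toy sketch with a hand-rolled decision stump on synthetic data; the feature names and labeling rule are illustrative, not the paper's setup:

```python
import numpy as np

def fit_stump(X, y):
    """One-level decision tree: choose the (feature, threshold) pair that
    best separates 'split' from 'non-split' CUs on the training data."""
    best = (0, 0.0, 0.0)  # (feature index, threshold, training accuracy)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            acc = float(np.mean((X[:, f] > t) == y))
            if acc > best[2]:
                best = (f, t, acc)
    return best

rng = np.random.default_rng(0)
# Synthetic per-CU features from the decoded input stream (names are
# illustrative): [input CU depth, MV magnitude, residual energy].
X = rng.random((300, 3))
# Toy label: CUs with high residual energy split again in the output stream.
y = X[:, 2] > 0.6

feature, threshold, acc = fit_stump(X, y)
print(feature, acc)  # feature 2 recovers the toy rule with accuracy 1.0
```

Online training, which the paper finds superior, would periodically refit such a model on CUs transcoded with the full search during the same sequence.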
Citations: 7
No-reference video quality metric for streaming service using DASH standard
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066339
D. Z. Rodríguez, R. L. Rosa, G. Bressan
This work proposes a no-reference video quality metric that considers two parameters, pauses and changes in video resolution. Results indicate that users' Quality-of-Experience (QoE) is highly correlated with these parameters. The proposed metric has low complexity because it is based on application-level parameters; it can, therefore, be easily implemented in consumer electronic devices.
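Such a metric can be sketched as a penalty model over the two application-level parameters. The weights, scale, and function name below are illustrative placeholders, not the paper's fitted coefficients:

```python
def qoe_score(num_pauses, total_pause_s, num_resolution_drops,
              w_pause=0.7, w_pause_len=0.3, w_drop=0.5):
    """Toy no-reference QoE estimate on a 1-5 MOS-like scale that penalizes
    playback pauses and downward resolution switches (weights are
    illustrative, not the paper's fitted values)."""
    penalty = (w_pause * num_pauses
               + w_pause_len * total_pause_s
               + w_drop * num_resolution_drops)
    return max(1.0, 5.0 - penalty)

print(qoe_score(0, 0.0, 0))  # 5.0: uninterrupted playback at stable resolution
print(qoe_score(2, 4.0, 1))  # lower score after two stalls and a resolution drop
```

Both inputs are observable at the application layer of a DASH player, which is what keeps the metric cheap enough for consumer devices.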
Citations: 2
Better together: Fusing visual saliency methods for retrieving perceptually-similar images
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066502
Amanda Fernandez, Siwei Lyu
In this paper, we describe a new model of visual saliency by fusing results from existing saliency methods. We first briefly survey existing saliency models, and justify the fusion methods as they take advantage of the strengths of all existing works. Initial experiments indicate that the fused saliency methods generate results closer to the ground-truth than the original methods alone. We apply our method to content-based image retrieval, leveraging a fusion method as a feature extractor. We perform experimental evaluation and show a marked improvement in retrieval performance using our fusion method over individual saliency models.
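A simple fusion baseline of this kind normalizes each detector's saliency map to a common range and averages them pixelwise; the paper's exact combination rule may differ:

```python
import numpy as np

def fuse_saliency(maps):
    """Fuse saliency maps from several detectors: min-max normalize each map
    to [0, 1] so scales are comparable, then average pixelwise."""
    normed = []
    for m in maps:
        m = m.astype(float)
        span = m.max() - m.min()
        normed.append((m - m.min()) / span if span > 0 else np.zeros_like(m))
    return np.mean(normed, axis=0)

a = np.array([[0.0, 10.0], [0.0, 0.0]])  # detector 1: hot spot, large scale
b = np.array([[0.0, 1.0], [0.0, 0.0]])   # detector 2 agrees, smaller scale
fused = fuse_saliency([a, b])
print(fused[0, 1], fused[1, 0])  # 1.0 0.0: agreement survives the scale gap
```

For retrieval, the fused map would then be summarized into a feature vector (e.g. pooled region statistics) and compared across images.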
Citations: 3
Improving the coding performance of 3D video using guided depth filtering
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066318
S. V. Leuven, G. Wallendael, Robin Ballieul, J. D. Cock, R. Walle
Autostereoscopic displays visualize a 3D scene based on encoded texture and depth information, which often lack the quality of good depth maps. Therefore, display manufacturers introduce different filtering techniques to improve the subjective quality of the reconstructed 3D image. This paper investigates the coding performance when applying depth filtering in a pre-processing step. As an example, guided depth filtering is used at the encoder side, which results in a 1.7% coding gain for 3D-HEVC and 8.0% for Multiview HEVC. However, applying additional filtering at the decoder side might deteriorate the subjective quality. Therefore, adaptively filtering based on the applied pre-processor filter is suggested, which can be done using a supplemental enhancement information message. For natural content, a gain of 5.7% and 9.3% is reported using this approach.
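Guided filtering (He et al.) smooths the depth map with locally affine fits against the texture view, so depth edges stay aligned with texture edges. A minimal numpy sketch of the filter itself; the radius, eps, and test scene are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, depth, radius=4, eps=1e-3):
    """Edge-preserving depth smoothing: fit depth as a locally affine
    function of the texture guide (guided filter, He et al. 2010)."""
    def mean(x):
        return uniform_filter(x, 2 * radius + 1)
    mean_I, mean_p = mean(guide), mean(depth)
    cov_Ip = mean(guide * depth) - mean_I * mean_p
    var_I = mean(guide * guide) - mean_I ** 2
    a = cov_Ip / (var_I + eps)  # ~1 at texture edges, ~0 in flat regions
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)

# Noisy depth map, clean texture guide with a vertical edge.
rng = np.random.default_rng(1)
guide = np.zeros((32, 32))
guide[:, 16:] = 1.0
depth = guide + rng.normal(0.0, 0.1, guide.shape)
out = guided_filter(guide, depth)
print(np.abs(out - guide).mean() < np.abs(depth - guide).mean())  # True
```

Applied as a pre-processing step before encoding, this kind of smoothing is what yields the reported bitrate gains, since flattened depth regions are cheaper to code.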
Citations: 2
Android-based C&D (connected & downloadable) IVI (in-vehicle infotainment) platform
Pub Date : 2015-03-26 DOI: 10.1109/ICCE.2015.7066494
P. Park, R. R. Igorevich, Daekyo Shin, Jongho Yoon
With the rapid growth of advanced In-Vehicle Infotainment (IVI) services, MOST (Media Oriented Systems Transport) has been commercialized. MOST provides S/W stacks for all layers, but these stacks are problematic because they are not based on an open-standard S/W stack and are therefore unfamiliar to IT-Automotive convergence S/W developers. To solve this problem and deploy MOST more widely, an Android-based IVI platform is proposed and demonstrated in cooperation with the built-in MOST amplifier for commercial cars.
Citations: 0