Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066347
K. Yun, W. Cheong, Jin Young Lee, Kyuheon Kim, Gwangsoon Lee
This paper introduces a hybrid architecture for efficient 3D video transmission over a legacy DTV channel and an IP network. The architecture specifically includes a robust synchronization method for heterogeneous networks, adaptive streaming of the additional 3D view using ISO/IEC 23009-1 DASH, and a transport stream system target decoder (T-STD) model for stable playback of both views. Based on experimental results, we confirm that the proposed architecture can serve as a core technology for hybrid 3DTV broadcasting and as a reference model for the development of various hybrid services.
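The core synchronization problem this hybrid architecture addresses can be illustrated with a small sketch: given a presentation timestamp (PTS) from the broadcast TS view, locate the DASH segment of the additional view that covers the same presentation time. The timescale, segment duration, and helper name below are invented for illustration; the paper's actual synchronization signalling is not reproduced here.

```python
# Map a broadcast-side MPEG-2 TS timestamp to a DASH segment number so the
# additional view delivered over IP can be fetched in sync with the TS view.
MPEG_TS_CLOCK = 90_000          # MPEG-2 TS PTS ticks per second

def segment_for_pts(pts, timescale, seg_duration, presentation_offset=0):
    """Return the 1-based ($Number$-style) DASH segment covering `pts`,
    given the MPD timescale and per-segment duration in that timescale."""
    media_time = (pts / MPEG_TS_CLOCK) * timescale - presentation_offset
    return 1 + int(media_time // seg_duration)

# PTS of 13.5 s into the program, MPD timescale 1000, 2-second segments:
print(segment_for_pts(pts=int(13.5 * MPEG_TS_CLOCK),
                      timescale=1000, seg_duration=2000))  # segment 7
```

In a real hybrid receiver the presentation offset would come from signalling in the stream rather than a constant, but the arithmetic above captures the cross-network timeline mapping the abstract refers to.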
Title: A hybrid architecture based on TS and HTTP for real-time 3D video transmission (2015 IEEE International Conference on Consumer Electronics (ICCE))
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066560
T. Nishio, M. Morikura, Koji Yamamoto
Many media exist for communication, such as LTE, IEEE 802.11 wireless local area networks (WLANs), millimeter-wave communications, and visible light communications (VLC), and much research has been conducted on improving the performance of each medium individually. However, using a single medium limits performance to below the upper bound achievable by combining multiple media. Moreover, some media are widely used while others are not, because their use cases are limited; as a result, the more commonly used media suffer from a lack of bandwidth while bandwidth on other media remains abundant. In this paper, we propose a heterogeneous media communications (HeMCOM) framework in which multiple media are used to leverage this abundant bandwidth and increase total communication performance. HeMCOM focuses on exploiting the differences in the PHY and MAC characteristics of each medium. This paper summarizes the HeMCOM concept, reviews related work from this perspective, and discusses the feasibility of using several types of media together.
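The idea of exploiting differing PHY/MAC characteristics can be sketched as a traffic-to-medium assignment: latency-critical flows go on a reliable medium, bulk flows on whichever medium has spare capacity. The media names, capacities, and selection rule below are invented for illustration and are not part of the HeMCOM framework itself.

```python
# Toy medium-selection policy in the spirit of HeMCOM: each medium has
# different capacity/reliability characteristics, and flows are assigned
# to the medium best suited to their requirements.
MEDIA = {
    "wlan_2.4GHz": {"capacity_mbps": 50, "reliable": True},
    "mmwave_60GHz": {"capacity_mbps": 1000, "reliable": False},
}

def assign_medium(flow_mbps, latency_critical):
    """Return the name of a medium for the flow, or None if nothing fits."""
    if latency_critical:
        candidates = {k: v for k, v in MEDIA.items() if v["reliable"]}
    else:
        candidates = MEDIA
    # Pick the candidate with the most capacity that still fits the flow.
    name = max(candidates, key=lambda k: candidates[k]["capacity_mbps"])
    return name if candidates[name]["capacity_mbps"] >= flow_mbps else None

print(assign_medium(5, latency_critical=True))     # wlan_2.4GHz
print(assign_medium(400, latency_critical=False))  # mmwave_60GHz
```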
Title: Heterogeneous media communications for future wireless local area networks
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066419
R. Gulati, V. Easwaran, P. Karandikar, Mihir Mody, Prithvi Shankar
Nowadays it has become common practice to use multi-core SoCs in safety-related Advanced Driver Assistance Systems (ADAS). The ISO 26262 functional safety standard provides requirements to avoid or reduce the risk caused by these systems. In safety-related systems, a comprehensive test strategy is required to guarantee correct operation of the SoC throughout its life cycle. Software-based self-tests have been proposed as an effective alternative to hardware-based self-tests, eliminating area overhead and saving the cost of developing new hardware IP. This paper proposes a software-based self-test scheme that ensures the integrity of the imaging subsystem and prevents violation of the defined safety goals for several camera-based ADAS applications. The proposal uses a hand-crafted, functional, time-triggered, non-concurrent online test: a known golden-reference image-processing run is executed once every fault-tolerant time interval, covering permanent and intermittent faults in the imaging subsystem. For a sample 1080p30 input capture, assuming a fault-tolerant time interval of 300 ms for a typical ADAS application and running the hand-crafted test pattern after every 8 frames, the proposed solution enables the self-test at an additional 12.5% clocking requirement for the imaging subsystem and an additional 12.5% DDR throughput requirement.
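The overhead numbers quoted above follow directly from the scheduling parameters: one golden-reference test frame after every 8 captured frames is 1/8 = 12.5% extra work, and at 30 fps a 300 ms fault-tolerant time interval comfortably contains one test run. The helper below is illustrative arithmetic, not code from the paper.

```python
# Back-of-the-envelope overhead check for a periodic golden-reference
# self-test frame interleaved with normal capture frames.
def self_test_overhead(fps: float, ftti_ms: float, frames_between_tests: int):
    """Return (tests_per_ftti, extra_load_fraction)."""
    frame_period_ms = 1000.0 / fps                 # ~33.3 ms at 30 fps
    frames_per_ftti = ftti_ms / frame_period_ms    # 9 frames fit in 300 ms
    tests_per_ftti = frames_per_ftti / (frames_between_tests + 1)
    extra_load = 1.0 / frames_between_tests        # 1 extra frame per 8 real ones
    return tests_per_ftti, extra_load

tests, load = self_test_overhead(fps=30, ftti_ms=300, frames_between_tests=8)
print(f"extra clocking/DDR load: {load:.1%}")      # 12.5%
```

This confirms the paper's figure: the 12.5% applies to both the imaging-subsystem clock and the DDR throughput, since the test frame passes through the same pipeline as a real frame.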
Title: Resolving ADAS imaging subsystem functional safety quagmire
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066433
Zaur Fataliyev, D. Han, Y. Imamverdiyev, Hanseok Ko
This paper proposes a novel fusion method for summarizing surveillance videos based on the extracted key positions of spotted objects in the observed area. The accumulated energy of each object, calculated by analyzing its motion pattern, is used for key-position extraction. The method allows long videos to be summarized into a single index frame. Experimental results demonstrate its effectiveness.
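The accumulated-energy idea can be sketched as summing per-pixel frame differences along an object's track and picking the frame where the motion energy jumps most. The tiny 1-D "frames" and the exact energy definition below are stand-ins for illustration; the paper's actual formulation is not given in the abstract.

```python
# Toy accumulated motion energy: sum squared frame-to-frame differences
# along an object track and return the index of the frame following the
# largest motion burst, used here as the "key position".
def accumulated_energy(track):
    """track: list of equal-length intensity rows for one object."""
    energies, total = [], 0.0
    for prev, cur in zip(track, track[1:]):
        total += sum((a - b) ** 2 for a, b in zip(prev, cur))
        energies.append(total)
    # Per-step gain in accumulated energy; its peak marks the key frame.
    gains = [energies[0]] + [b - a for a, b in zip(energies, energies[1:])]
    return 1 + max(range(len(gains)), key=gains.__getitem__)

track = [[0, 0, 0], [0, 5, 0], [9, 5, 0], [9, 5, 0]]
print(accumulated_energy(track))  # 2: the frame after the biggest motion burst
```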
Title: Video summarization based on extracted key position of spotted objects
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066396
R. Valente, Waldir Sabino da Silva, V. Lucena
This paper presents an effective system for dynamic integration of appliances into a ZigBee home network through home gateways based on Web services. The proposed architecture is described throughout the paper. The resulting system has been implemented and used in experiments on a home-network test bed to prove its feasibility and effectiveness. The results obtained are promising. This new architecture is expected to contribute to the development of ubiquitous service systems for home-network domains using consumer electronics devices.
Title: Dynamic integration of appliances into ZigBee home networks through web services
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066464
Luong Pham Van, J. D. Praeter, G. Wallendael, J. D. Cock, R. Walle
In this paper, we propose a machine-learning-based transcoding scheme for arbitrarily downsizing a pre-encoded High Efficiency Video Coding (HEVC) video. The spatial scaling factor can be freely selected to adapt the output bit rate to the bandwidth of the network. Furthermore, machine-learning techniques can exploit the correlation between input and output coding information to predict the split flag of coding units in a P-frame. We analyzed the performance of both offline and online training in the learning phase of transcoding. The experimental results show that the proposed techniques significantly reduce transcoding complexity and achieve trade-offs between coding performance and complexity. In addition, we demonstrate that online training performs better than offline training.
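The split-flag prediction idea can be illustrated with a deliberately tiny model: learn, from pairs of input-stream features and output split decisions, a threshold that predicts whether a coding unit will split. A single-feature decision stump trained on a toy history is an assumption for illustration; the paper's actual classifier and feature set are not specified in the abstract.

```python
# Illustrative online-learning sketch: predict an output CU split-flag from
# the input CU depth via the best single threshold (a decision stump).
def train_stump(samples):
    """samples: list of (input_cu_depth, split_flag). Return the depth
    threshold that best separates split from non-split CUs."""
    best_thr, best_acc = 0, 0.0
    for thr in range(4):                       # HEVC CU depths 0..3
        acc = sum((d >= thr) == s for d, s in samples) / len(samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

# Toy training history gathered during an online-learning phase:
history = [(0, False), (1, False), (2, True), (3, True), (2, True)]
thr = train_stump(history)
predict_split = lambda depth: depth >= thr
print(thr, predict_split(3))  # 2 True
```

Skipping the encoder's exhaustive split search whenever the prediction is confident is what yields the complexity reduction the abstract reports; online training helps because the input/output correlation is content-dependent.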
Title: Machine learning for arbitrary downsizing of pre-encoded video in HEVC
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066339
D. Z. Rodríguez, R. L. Rosa, G. Bressan
This work proposes a no-reference video quality metric that considers two parameters: pauses and changes in video resolution. Results indicate that users' Quality of Experience (QoE) is highly correlated with these parameters. The proposed metric has low complexity because it is based on application-level parameters; it can therefore be easily implemented in consumer electronics devices.
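A metric of this kind only needs the player's own event log, which is what makes it cheap. The sketch below penalizes total pause time and downward resolution switches on a 1-5 MOS-like scale; the weights, the scale, and the function itself are assumptions for illustration, not the authors' model.

```python
# Minimal application-level QoE sketch: score a playback session from its
# stall log and resolution track, with no access to the decoded pixels.
def qoe_score(pause_seconds, resolution_track, w_pause=0.5, w_switch=0.3):
    """Map pause durations and resolution downswitches to a 1-5 score."""
    downswitches = sum(
        1 for prev, cur in zip(resolution_track, resolution_track[1:])
        if cur < prev
    )
    penalty = w_pause * sum(pause_seconds) + w_switch * downswitches
    return max(1.0, 5.0 - penalty)

# Two pauses totalling 3 s and one 1080p -> 720p downswitch:
print(qoe_score([1.0, 2.0], [1080, 1080, 720]))  # roughly 3.2 on a 1-5 scale
```

Both inputs are available from any DASH player's event callbacks, which is consistent with the abstract's claim that the metric fits on constrained consumer devices.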
Title: No-reference video quality metric for streaming service using DASH standard
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066502
Amanda Fernandez, Siwei Lyu
In this paper, we describe a new model of visual saliency that fuses the results of existing saliency methods. We first briefly survey existing saliency models and justify the fusion approach, which takes advantage of the strengths of these existing works. Initial experiments indicate that the fused saliency methods generate results closer to the ground truth than the original methods alone. We apply our method to content-based image retrieval, leveraging a fusion method as a feature extractor. We perform an experimental evaluation and show a marked improvement in retrieval performance using our fusion method over individual saliency models.
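A common baseline for this kind of late fusion is to min-max normalize each model's saliency map and average them, so no single model's scale dominates. The sketch below uses that baseline with two hand-made 2x2 "maps" as an assumption for illustration; the paper's actual fusion rule is not specified in the abstract.

```python
# Late fusion of saliency maps: normalize each map to [0, 1], then average.
def normalize(m):
    lo, hi = min(map(min, m)), max(map(max, m))
    return [[(v - lo) / (hi - lo) for v in row] for row in m]

def fuse(maps):
    """Element-wise mean of min-max-normalized saliency maps."""
    normed = [normalize(m) for m in maps]
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(n[r][c] for n in normed) / len(maps)
             for c in range(cols)] for r in range(rows)]

map_a = [[0.0, 2.0], [4.0, 8.0]]      # model A, arbitrary scale
map_b = [[10.0, 20.0], [30.0, 50.0]]  # model B, different scale
fused = fuse([map_a, map_b])
print(fused[1][1])  # 1.0: both models agree the bottom-right is most salient
```

For the retrieval application, the fused map (or statistics of it) would then serve as the feature vector for comparing images.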
Title: Better together: Fusing visual saliency methods for retrieving perceptually-similar images
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066318
S. V. Leuven, G. Wallendael, Robin Ballieul, J. D. Cock, R. Walle
Autostereoscopic displays visualize a 3D scene from encoded texture and depth information, but the encoded depth maps often lack quality. Therefore, display manufacturers introduce different filtering techniques to improve the subjective quality of the reconstructed 3D image. This paper investigates the coding performance of applying depth filtering in a pre-processing step. As an example, guided depth filtering is used at the encoder side, which results in a 1.7% coding gain for 3D-HEVC and 8.0% for Multiview HEVC. However, applying additional filtering at the decoder side might deteriorate the subjective quality. Therefore, we suggest filtering adaptively based on the applied pre-processing filter, which can be signalled using a supplemental enhancement information (SEI) message. For natural content, gains of 5.7% and 9.3% are reported using this approach.
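The intuition behind guided depth filtering is that the texture view tells the filter where real object boundaries are, so depth can be smoothed (making it cheaper to code) without blurring depth edges. The toy 1-D joint-smoothing pass below illustrates that principle only; it is not the paper's guided filter, and the weighting scheme is an assumption.

```python
# Toy 1-D guidance-weighted smoothing: depth neighbours are averaged with
# weights based on similarity in the guidance (texture) signal, so a
# texture edge prevents depth values from bleeding across it.
import math

def guided_smooth(depth, guide, radius=1, sigma=10.0):
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma ** 2))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

depth = [10, 10, 10, 80, 80, 80]   # depth step edge
guide = [0, 0, 0, 255, 255, 255]   # texture edge at the same position
print(guided_smooth(depth, guide))  # the depth step survives smoothing
```

Flat depth regions get averaged (easier to compress), while the step at the texture edge is preserved, which is the trade-off behind the coding gains quoted above.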
Title: Improving the coding performance of 3D video using guided depth filtering
Pub Date: 2015-03-26 | DOI: 10.1109/ICCE.2015.7066494
P. Park, R. R. Igorevich, Daekyo Shin, Jongho Yoon
With the rapid growth of advanced In-Vehicle Infotainment (IVI) services, MOST (Media Oriented Systems Transport) has been commercialized. MOST provides S/W stacks for all layers, but these have significant drawbacks: they are not based on an open-standard S/W stack and are therefore unfamiliar to IT-automotive convergence S/W developers. To solve this problem and deploy MOST more widely, an Android-based IVI platform is proposed and demonstrated in cooperation with the built-in MOST amplifier for commercial cars.
Title: Android-based C&D (connected & downloadable) IVI (in-vehicle infotainment) platform