Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266500
Qi Feng, Chenqiang Gao, Lan Wang, Minwen Zhang, Lian Du, Shiyu Qin
In recent years, population aging has become a problem that many countries must face. As the proportion of elderly people living alone grows, indoor yet fatal accidents are becoming more frequent. Falls are among the most common and dangerous of these accidents for the elderly, so timely rescue after a fall is particularly important, especially for elderly people who live alone. With the development of computer vision technology and the popularity of home surveillance, fall detection based on video analysis offers a good solution to this problem. In this paper, we propose a new fall event detection algorithm. It obtains a sub-motion history image by mapping bounding boxes detected by Faster R-CNN onto the motion history image, then extracts histogram of oriented gradients (HOG) features, and finally uses a support vector machine (SVM) for fall classification. Experiments show that our approach achieves very high recall and precision on a dataset of realistic image sequences of simulated falls and daily activities.
{"title":"Fall detection based on motion history image and histogram of oriented gradient feature","authors":"Qi Feng, Chenqiang Gao, Lan Wang, Minwen Zhang, Lian Du, Shiyu Qin","doi":"10.1109/ISPACS.2017.8266500","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266500","url":null,"abstract":"In recent years, the aging of population is one of the problems that many countries need to face. Along with the increasing proportion of elderly people living alone, there are more indoor but fatal accidents. Fall is one of these common and dangerous accidents for the elderly. Thus timely rescue after falls becomes particularly important, especially for elderly people who live alone. With the development of computer vision technology and the popularity of home surveillance, the fall detection algorithm based on video analysis provides a good solution to this problem. In this paper, we propose a new fall events detection algorithm. Our algorithm gets sub-motion history image by mapping faster R-CNN detected bounding boxes to motion history image, then extracts histogram of oriented gradient features, and finally uses support vector machine for fall classification. Proved by experiment, Our approach achieves very high recall rates and precision rates in a dataset of realistic image sequences of simulated falls and daily activities.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122075745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
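The two core building blocks of the pipeline described above, a motion history image (MHI) and a HOG-style descriptor, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `update_mhi` and `hog_feature` are hypothetical names, the HOG here is a single unnormalized orientation histogram (no cells or blocks), and the trained SVM and Faster R-CNN detector are omitted.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """MHI update rule: pixels with motion take the current timestamp;
    entries older than `duration` decay to zero."""
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp
    mhi[(~motion_mask) & (mhi < timestamp - duration)] = 0.0
    return mhi

def hog_feature(patch, n_bins=9):
    """Minimal gradient-orientation histogram for one patch
    (full HOG adds cell/block normalization on top of this)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

In the paper's pipeline, the patch passed to the descriptor would be the sub-MHI cropped by a person bounding box, and the resulting histogram would be fed to a trained SVM classifier.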
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266531
Hiromu Endo, A. Taguchi
This paper presents a new color image enhancement method. In color image processing, hue preservation is required. The proposed method operates in the ideal HSI color space, whose gamut is the same as that of the RGB color space. Differential gray-level histogram equalization (DHE) is effective for enhancing grayscale images. The proposed method extends DHE to color images and, furthermore, makes the degree of enhancement adjustable through two parameters. Since our enhancement is applied not only to intensity but also to saturation, both the contrast and the colorfulness of the output image can be varied. We show that contrast and colorfulness are controlled by the intensity enhancement and the saturation enhancement, respectively, and that the two enhancements are almost independent. Therefore, intensity enhancement and saturation enhancement can be performed simultaneously, and the degree of emphasis can be changed via the two parameters.
{"title":"Color image enhancement method with adjustable emphasis degree","authors":"Hiromu Endo, A. Taguchi","doi":"10.1109/ISPACS.2017.8266531","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266531","url":null,"abstract":"This paper presents a new color image enhancement method. In color image processing, hue preserving is required. The proposed method is performed into the ideal HSI color space whose gamut is same as the RGB color space. The differential gray-level histogram equalization (DHE) is effective for the enhancement of gray scale images. The proposed method is an extension version of the DHE for color images, and furthermore, the enhancement degree is variable by introducing two parameters. Since our enhancement method is applied to not only intensity but also saturation, both the contrast and the colorfulness of the output image can be varied. We clear that the contrast and the colorfulness are controlled by the intensity enhancement method and the saturation enhancement method, respectively. The two enhancement methods are almost independent. Therefore, the intensity enhancement and the saturation enhancement are performed simultaneously, and can be changed the degree of emphasis by using two parameters.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123402234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
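The separation of intensity and saturation control under hue preservation can be illustrated with a toy example. This is a sketch under stated assumptions, not the paper's DHE mapping: `alpha` and `beta` are illustrative stand-ins for the two emphasis-degree parameters, and the operations are the simplest hue-preserving ones (scaling the RGB vector changes HSI intensity only; scaling each pixel's spread around its gray value changes saturation only, up to clipping).

```python
import numpy as np

def enhance(rgb, alpha=1.0, beta=1.0):
    """Hue-preserving toy enhancement: `alpha` controls intensity,
    `beta` controls saturation about the gray axis."""
    rgb = np.clip(rgb * alpha, 0.0, 1.0)            # intensity control
    i = rgb.mean(axis=-1, keepdims=True)            # HSI intensity
    return np.clip(i + beta * (rgb - i), 0.0, 1.0)  # saturation control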
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266599
Ying Hu, Fei Zhang, Chunguo Li, Yi Wang, Rui Zhao
In this paper, a downlink cell-free massive MIMO system (CF-M-MIMO-S) is considered. The CF-M-MIMO-S is a distributed massive MIMO system in which access points (APs) with a very large number of antennas and a much smaller number of independent users are randomly distributed. First, an approximate expression for the capacity under perfect channel state information and conjugate beamforming is derived. Second, an energy-efficient (EE) resource allocation strategy is proposed, which aims to maximize the system EE. Specifically, the power consumption includes transmit power, computation power, and circuit power. Simulation results indicate that the throughput given by the derived approximate expression is very close to the theoretical value. They also demonstrate the effectiveness of the proposed algorithm, reveal the trade-off between EE and the number of APs, and show that the throughput performance of the proposed algorithm is very good.
{"title":"Energy efficiency resource allocation in downlink cell-free massive MIMO system","authors":"Ying Hu, Fei Zhang, Chunguo Li, Yi Wang, Rui Zhao","doi":"10.1109/ISPACS.2017.8266599","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266599","url":null,"abstract":"In this paper, a downlink cell-free Massive MIMO system (CF-M-MIMO-S) is considered. The CF-M-MIMO-S is a distributed M-MIMO-S, where access points (APs) with very great quantity of antennas, and a much smaller number of independent users are distributed randomly. Firstly, an approximate expression for the capacity with perfect channel state information and conjugate beamforming scheme is derived. Secondly, an energy-efficient (EE) resource allocation strategy is advanced, which is aim to maximize system EE. Specifically, the power consumption include transmitting power, calculation power and circuit power. Simulation results indicate that the throughput of derived approximate expression is very close to theoretical value. It is also demonstrated The effectiveness of proposed algorithm and the trade-off between EE and the quantity of Aps is, meanwhile, the performance of throughput of the proposed algorithm is very well.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123566755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
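The basic EE trade-off behind this kind of allocation (rate grows only logarithmically in transmit power while total power grows linearly, so EE has an interior maximum) can be seen in a toy model. The gain, bandwidth, and fixed-power values below are illustrative, not the paper's system model, and the grid search stands in for the paper's optimization algorithm.

```python
import numpy as np

def energy_efficiency(p_tx, gain=50.0, bandwidth=1.0, p_fixed=0.5):
    """EE = achievable rate / total power; `p_fixed` lumps the
    circuit and computation power terms together."""
    rate = bandwidth * np.log2(1.0 + gain * p_tx)
    return rate / (p_tx + p_fixed)

# simple grid search for the EE-optimal transmit power
grid = np.linspace(1e-3, 5.0, 5000)
p_opt = grid[np.argmax(energy_efficiency(grid))]
```

Transmitting at maximum power is throughput-optimal but not EE-optimal, which is exactly the tension an EE resource allocation strategy resolves.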
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266564
Yukun Dai, Zhiheng Zhou, Xi Chen, Yi Yang
Gesture recognition is a major area of artificial intelligence, and gesture segmentation is a difficult problem in continuous-vocabulary gesture recognition. Many automatic techniques exist for segmenting gestures; however, most of them introduce a time interval between gesture segmentation and the output of recognition results, which harms the performance of continuous gesture recognition. To avoid this interval, a novel method for continuous-vocabulary gesture recognition is proposed. In our method, the start and end positions of every gesture sequence are found by monitoring changes in the probability of the gesture sequence's occurrence, as defined under the Hidden Markov Model (HMM). We also propose a method to automatically determine the threshold used in the algorithm, which effectively improves segmentation accuracy and makes the algorithm more robust. In experiments, 93.88% accuracy is obtained for gesture segmentation and 92.22% for gesture recognition after segmentation.
{"title":"A novel method for simultaneous gesture segmentation and recognition based on HMM","authors":"Yukun Dai, Zhiheng Zhou, Xi Chen, Yi Yang","doi":"10.1109/ISPACS.2017.8266564","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266564","url":null,"abstract":"Gesture recognition is a big area of artificial intelligence, gesture segmentation is the difficult problem of continuous vocabulary gesture recognition. There are many automatic techniques to segment gesture, however, most of them have an time interval between the gesture segmentation and output recognition results. The interval is not great for performance of continuous gesture recognition. In order to avoid the time interval, a novel method of continuous vocabulary gesture recognition is proposed. In our method, the start point and the end position of every gesture sequence are found by judging the change of the probability. The probability is the probability of gesture sequence occurrence that is defined by the gesture sequence in the Hidden Markov Model (HMM). We also propose a method to automatically determine the threshold used in the algorithm, which can effectively improve the segmentation accuracy and make the algorithm having better robustness. In the experiment, 93.88 % accuracy can be obtained to the gesture segmentation and 92.22 % accuracy can be obtained to the gesture recognition after segmented.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131701820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
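The probability the method monitors is the kind of sequence likelihood the forward algorithm computes. The sketch below shows a scaled forward pass for the log-likelihood of an observation sequence under an HMM; it is generic textbook HMM machinery, not the paper's segmentation algorithm (their contribution is detecting boundaries from changes in this quantity and auto-tuning the threshold on those changes).

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for discrete
    observations, with per-step normalization for numerical stability."""
    alpha = pi * B[:, obs[0]]            # initial state * emission
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # transition, then emission
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll
```

A segment boundary would then be declared when the per-frame likelihood of the active gesture model drops by more than the (automatically determined) threshold.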
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266551
Yuki Watanabe, Koken Chin, Hiroyuki Tsuchiya, H. San, T. Matsuura, M. Hotta
This paper presents a reconfigurable non-binary cyclic analog-to-digital converter (ADC) that can achieve different resolutions at different sampling frequencies with the same analog conversion stage. The conversion resolution (number of bits) can be increased with more conversion steps in the conventional cyclic manner, and the conversion speed can be enhanced by our proposed multi-rate clock operation mode. The prototype ADC has been designed and fabricated in TSMC 90 nm CMOS technology. Measured results of the experimental ADC demonstrate that an ENOB of 12.42 bits is achieved in the conventional cyclic mode at Fs = 470 kHz, and an ENOB of 9.96 bits in the proposed multi-rate clock mode at Fs = 889 kHz, using the same analog conversion stage and a simple radix-value estimation technique.
{"title":"Experimental results of reconfigurable non-binary cyclic ADC","authors":"Yuki Watanabe, Koken Chin, Hiroyuki Tsuchiya, H. San, T. Matsuura, M. Hotta","doi":"10.1109/ISPACS.2017.8266551","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266551","url":null,"abstract":"This paper presents a reconfigurable non-binary cyclic analog-to-digital converter (ADC) which can achieve different resolution at different sampling frequency with the same analog conversion stage. The conversion resolution (bit number) of ADC can be increased with more conversion steps in the conventional cyclic manner; and the conversion speed of the cyclic ADC can be enhanced by our proposed multi-rate clock operation mode. The prototype ADC has been designed and fabricated in TSMC 90nm CMOS technology. The measured results of the proposed experimental ADC demonstrate that ENOB=12.42bit is achieved in conventional cyclic ADC mode while Fs=470kHz, and ENOB=9.96bit is achieved in our proposed multi-rate clock mode while Fs=889kHz with the same analog conversion stage and the simple radix-value estimation technique.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130416452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
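A behavioral model helps show why a non-binary (radix < 2) cyclic conversion works: each step resolves one raw bit, the residue is amplified by the radix, and the redundancy of radix < 2 means the residue stays bounded even with comparator offsets. This is an idealized numerical sketch, not the fabricated circuit; the radix value and step count are illustrative.

```python
def cyclic_adc(x, radix=1.8, steps=16, vref=1.0):
    """Ideal non-binary cyclic conversion of x in [-vref, vref].

    Residue recursion: r' = radix*r - d*(radix-1)*vref, d = sign(r).
    Digital reconstruction weights bit i by radix**-(i+1), so the
    error after N steps is bounded by vref / radix**N.
    """
    bits, r = [], x
    for _ in range(steps):
        d = 1 if r >= 0 else -1
        bits.append(d)
        r = radix * r - d * (radix - 1) * vref
    return sum(d * (radix - 1) * vref * radix ** -(i + 1)
               for i, d in enumerate(bits))
```

With radix 1.8 and 16 steps the reconstruction error bound is vref / 1.8**16, i.e. below 1e-4, provided the radix used in the digital reconstruction matches the analog one, which is exactly what a radix-value estimation technique provides in hardware.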
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266495
En Yu, Jiande Sun, Li Wang, Huaxiang Zhang, Jing Li
With the explosive growth of multimedia data, cross-media retrieval technology has drawn much attention. Previous methods usually used the l2-norm as the regularization constraint when learning the projection matrices, which cannot exploit informative and discriminative features to reach better performance. In this paper, we propose a coupled feature selection model for cross-media retrieval (CFSCR) based on the modality-dependent method. In detail, the proposed framework learns two couples of projection matrices for the two retrieval sub-tasks (I2T and T2I) and uses the l2,1-norm for coupled feature selection when learning the mapping matrices, which not only considers the measure of relevance but also aims to select informative and discriminative features from the image and text feature spaces. Experimental results on three different datasets demonstrate that our method performs better than state-of-the-art methods.
{"title":"Coupled feature selection for modality-dependent cross-media retrieval","authors":"En Yu, Jiande Sun, Li Wang, Huaxiang Zhang, Jing Li","doi":"10.1109/ISPACS.2017.8266495","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266495","url":null,"abstract":"With the explosive growth of multimedia data, cross-media retrieval technology has drawn much attention. Previous methods usually used the l2-norm as the regularization constraint when learning the projection matrices, which cannot exploit informative and discriminative features to reach better performance. In this paper, we propose a coupled feature selection model for cross-media retrieval (CFSCR) based on the modality-dependent method. In detail, the proposed framework learns two couples of projection matrices for the two retrieval sub-tasks (I2T and T2I) and uses the l2,1-norm for coupled feature selection when learning the mapping matrices, which not only considers the measure of relevance but also aims to select informative and discriminative features from the image and text feature spaces. Experimental results on three different datasets demonstrate that our method performs better than state-of-the-art methods.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"77 4 Pt 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130554475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
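The l2,1-norm is what makes this regularizer select features where a plain l2-norm cannot: it sums the l2 norms of a matrix's rows, so minimizing it drives whole rows of the projection matrix to zero, discarding entire input features. A minimal sketch of the norm itself (not the paper's full objective or solver):

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm of W: sum over rows of each row's l2 norm.
    Row i of W carries all the weights attached to input feature i,
    so penalizing this norm zeroes out whole features at once."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()
```

For two matrices of equal Frobenius (l2) norm, the row-sparse one has the smaller l2,1-norm, which is why the optimizer prefers solutions that drop uninformative features.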
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266461
Wenxin Yu, Hao Sun, Gang He, Zhiqiang Zhang
H.264/AVC has been deployed in various multimedia communication systems such as mobile TV and Internet video streaming. However, it suffers from packet losses in transmission. Temporal error concealment is a decoder-based error-resilient solution that adds no transmission delay. In this paper, a multi-step temporal error concealment scheme is proposed to improve concealment quality. The 4×4 sub-block partition is adopted as the motion vector recovery unit, and motion vector recovery is performed for edge motion vectors. An early termination method is also used to reduce part of the computational complexity. Furthermore, a weighted boundary matching algorithm is proposed to avoid error propagation. Experimental results show that the proposed scheme obtains better video quality than conventional approaches.
{"title":"A multi-step temporal error concealment method","authors":"Wenxin Yu, Hao Sun, Gang He, Zhiqiang Zhang","doi":"10.1109/ISPACS.2017.8266461","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266461","url":null,"abstract":"H.264/AVC has been deployed in various multimedia communication systems such as mobile TV and Internet video streaming systems. However, it suffers from various packet losses in transmission. Temporal error concealment is a kind of decoder-based error resilient solution without transmission delay. In this paper, a multi-step temporal error concealment scheme is proposed to improve the error concealment quality. The 4×4 sub-block partition is adopted as the motion vector recovery unit and motion vector recovery is done for edge motion vectors. An early termination method is also used to reduce some part of computation complexity. Furthermore, a weighted boundary matching algorithm is proposed to avoid error propagation. The experimental results show that our proposed scheme can obtain better video quality than conventional approaches.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134098564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
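The boundary matching idea at the core of such schemes can be sketched directly: for a lost block, each candidate motion vector fetches a block from the reference frame, and the candidate whose border best matches the correctly received pixels around the hole wins. The function names, the per-side weights, and the candidate set below are illustrative, not the paper's specific weighting or multi-step search.

```python
import numpy as np

def boundary_match_cost(cur, ref, y, x, n, mv, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of absolute differences between the candidate block's
    border (from `ref`, displaced by `mv`) and the received pixels just
    outside the lost n-by-n block at (y, x) in `cur`."""
    by, bx = y + mv[0], x + mv[1]
    cand = ref[by:by + n, bx:bx + n].astype(float)
    wt, wb, wl, wr = weights
    cost = wt * np.abs(cand[0] - cur[y - 1, x:x + n]).sum()       # top
    cost += wb * np.abs(cand[-1] - cur[y + n, x:x + n]).sum()     # bottom
    cost += wl * np.abs(cand[:, 0] - cur[y:y + n, x - 1]).sum()   # left
    cost += wr * np.abs(cand[:, -1] - cur[y:y + n, x + 1]).sum()  # right
    return cost

def conceal(cur, ref, y, x, n, candidates):
    """Pick the candidate MV with minimal boundary cost, copy the block."""
    best = min(candidates,
               key=lambda mv: boundary_match_cost(cur, ref, y, x, n, mv))
    cur[y:y + n, x:x + n] = ref[y + best[0]:y + best[0] + n,
                                x + best[1]:x + best[1] + n]
    return best
```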
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266583
Xueyang Fu, Zhiwen Fan, Mei Ling, Yue Huang, Xinghao Ding
Underwater images often suffer from color shift and contrast degradation due to the absorption and scattering of light as it travels through water. To handle these issues, we pose and solve two sub-problems to improve underwater image quality. First, we introduce an effective color correction strategy based on piecewise linear transformation to address the color distortion. Then we present a novel optimal contrast improvement method, which is efficient and reduces artifacts, to address the low contrast. Since most operations are pixel-wise calculations, the proposed method is straightforward to implement and suitable for real-time applications. In addition, no prior knowledge about imaging conditions is required. Experiments show improvements in color, contrast, naturalness, and object prominence in the enhanced images.
{"title":"Two-step approach for single underwater image enhancement","authors":"Xueyang Fu, Zhiwen Fan, Mei Ling, Yue Huang, Xinghao Ding","doi":"10.1109/ISPACS.2017.8266583","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266583","url":null,"abstract":"Underwater images often suffer from color shift and contrast degradation due to the absorption and scattering of light while traveling in water. In order to handle these issues, we present and solve two sub-problems to improve underwater image quality. First, we introduce an effective color correcting strategy based on piece-wise linear transformation to address the color distortion. Then we discuss a novel optimal contrast improvement method, which is efficient and can reduce artifacts, to address the low contrast. Since most operations are pixel-wise calculations, the proposed method is straightforward to implement and appropriate for real-time application. In addition, prior knowledge about imaging conditions is not required. Experiments show an improvement in the enhanced image of color, contrast, naturalness and object prominence.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132848828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
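The first sub-step, channel-wise piecewise-linear color correction, can be sketched with a simple percentile stretch: each channel's central range is mapped linearly onto [0, 1] and the tails are clipped, which removes a cast such as the underwater green/blue shift. The percentile choices and function name are illustrative assumptions; the paper's transformation is more elaborate.

```python
import numpy as np

def color_correct(img, p_low=1.0, p_high=99.0):
    """Per-channel piecewise-linear stretch: map the [p_low, p_high]
    percentile range of each channel onto [0, 1], clipping the tails."""
    out = np.empty(img.shape, dtype=float)
    for c in range(img.shape[-1]):
        lo, hi = np.percentile(img[..., c], [p_low, p_high])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6),
                              0.0, 1.0)
    return out
```

Because each channel is stretched independently, a channel compressed into a narrow band (e.g. an attenuated red channel underwater) regains the same dynamic range as the others, equalizing the color balance.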
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266565
Huaiye Luo, Bo Li, Zhiheng Zhou
Motion detection plays a crucial role in intelligent video surveillance. PAWCS (Pixel-based Adaptive Word Consensus Segmenter), a universal background subtraction algorithm based on word consensus models, has recently been shown to perform well in video motion detection. In this paper, we present an algorithm that improves the robustness of PAWCS. Specifically, updates to the background model are inhibited when pixels lie on the edges of foreground objects. Then, a bi-updating approach is used in the model updating strategy, and the persistence of each word is updated according to its matching accuracy. Finally, experimental results demonstrate the effectiveness of our method.
{"title":"Improved background subtraction based on word consensus models","authors":"Huaiye Luo, Bo Li, Zhiheng Zhou","doi":"10.1109/ISPACS.2017.8266565","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266565","url":null,"abstract":"The motion detection approach plays a crucial role in the intelligent video surveillance technology. A universal background subtraction algorithm called PAWCS (Pixel-based Adaptive Word Consensus Segmenter), based on word consensus models, is proven that it performs better in video motion detection recently. In this paper, we present an algorithm to improve the robustness of PAWCS. Specifically, the background models' update can be inhibited when the pixels locate in the edge of foreground objects. Then, the bi-updating approach is used in the models updating strategy, and the persistence of the word will be updated according to their matching accuracy. Finally, the experiments' results demonstrate the effectiveness of our method.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"187 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134530887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
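The edge-inhibition idea can be illustrated with a much simpler background model than PAWCS's word consensus: a running-average background that skips updating any pixel in, or one pixel away from, the foreground mask. The running average and one-pixel dilation are stand-ins of my own, not the paper's mechanism; they only show where the inhibition is applied.

```python
import numpy as np

def update_background(bg, frame, fg_mask, lr=0.05):
    """Running-average background update that excludes foreground pixels
    and their immediate neighbors (a one-pixel dilation of fg_mask),
    mirroring the idea of inhibiting updates at foreground edges."""
    pad = np.pad(fg_mask, 1)
    near_fg = (fg_mask | pad[:-2, 1:-1] | pad[2:, 1:-1]
               | pad[1:-1, :-2] | pad[1:-1, 2:])
    bg = bg.astype(float).copy()
    upd = ~near_fg
    bg[upd] = (1.0 - lr) * bg[upd] + lr * frame[upd]
    return bg
```

Freezing the model near object boundaries prevents slowly moving foreground edges from being absorbed into the background, a common cause of ghosting artifacts.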
Pub Date : 2017-11-01DOI: 10.1109/ISPACS.2017.8266540
Jhih-You Deng, J. Chiang
High dynamic range (HDR) images attract growing attention in many practical applications by offering an extended dynamic range and, accordingly, an improved visual experience. Because HDR cameras are expensive and rare, many studies generate HDR images from several low dynamic range (LDR) images with different exposures. In this paper, we propose a technique to compress the multi-exposure images efficiently, so that HDR image generation, as well as multi-exposure fusion, can be realized at the decoder. The multi-exposure images are encoded by MV-HEVC, and the inter-view redundancy is exploited by converting the intensity of the reconstructed base view with the help of an accurate intensity-mapping function. Compared with encoding the generated HDR image using the HEVC range extension, experimental results show that the proposed technique achieves better coding efficiency.
{"title":"Multi-exposure images coding for efficient high dynamic range image compression","authors":"Jhih-You Deng, J. Chiang","doi":"10.1109/ISPACS.2017.8266540","DOIUrl":"https://doi.org/10.1109/ISPACS.2017.8266540","url":null,"abstract":"High dynamic range (HDR) images attract growing attention in many practical applications by offering an extended dynamic range, and an improved visual experience accordingly. Because of the expense and rarity of HDR cameras, many studies generate HDR images using several low dynamic range (LDR) images with different exposures. In this paper, we propose a technique to compress the multi-exposure images in an efficient way. Then HDR image generation, as well as multi-exposure fusion can be realized in the decoder. The multi-exposure images are encoded by MV-HEVC and the inter-view redundancy will be well exploited by converting the intensity of the reconstructed base view with the help of an accurate intensity-mapping function. Compared to the scenario by encoding the generated HDR image with HEVC range extension, experimental results show that the proposed technique achieves better coding efficiency.","PeriodicalId":166414,"journal":{"name":"2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134531605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
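The intensity-mapping function (IMF) that links two exposures of the same scene can be estimated empirically: for each intensity level in the base view, average the co-located intensities in the other exposure. Applying this map to the decoded base view yields an inter-view predictor, which is the redundancy the MV-HEVC coding then exploits. The estimator below is a minimal sketch under that assumption; the paper's IMF construction may differ.

```python
import numpy as np

def estimate_imf(base, target, levels=256):
    """Empirical intensity-mapping function between two exposures:
    imf[v] = mean target intensity over pixels where base == v.
    Levels absent from `base` keep the identity mapping."""
    imf = np.arange(levels, dtype=float)
    for v in range(levels):
        sel = base == v
        if sel.any():
            imf[v] = target[sel].mean()
    return imf
```

Prediction is then just a lookup, `imf[base]`; only the residual between this prediction and the true second exposure needs to be coded.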