Title: No-reference quality assessment for image sharpness and noise
Authors: Lijuan Tang, Xiongkuo Min, V. Jakhetiya, Ke Gu, Xinfeng Zhang, Shuai Yang
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820693
Abstract: Blind evaluation of image visual quality is of great importance in many image processing and computer vision applications. In this paper, we develop a novel training-free no-reference (NR) quality metric (QM) based on a unified brain theory, the free energy principle. The free energy principle states that there always exists a difference between an input visual signal and the version processed by the human brain; this difference encompasses the "surprising" information between the real and processed signals and has been found to be highly related to visual quality and attention. More specifically, given a distorted image, we first compute this difference to approximate its visual quality and saliency via a semi-parametric method that combines a bilateral filter with an auto-regression model. Afterwards, the computed visual saliency and a new natural scene statistics (NSS) model are used to refine the final visual quality score. Extensive experiments are conducted on popular natural scene image databases and a recently released screen content image database for performance comparison. The results prove the effectiveness of the proposed blind quality measure compared with classical and state-of-the-art full- and no-reference QMs.
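The free-energy pipeline above — predict the image with an internal model, then measure the residual "surprise" — can be sketched in a toy form. This is a stand-in, not the paper's semi-parametric bilateral-filter/auto-regression predictor: here the prediction is simply the 4-neighbour mean, and the surprise score is the mean absolute residual (larger values suggest lower perceived quality).

```python
def ar_predict(img, y, x):
    # Predict a pixel from its 4-neighbourhood mean: a crude stand-in for the
    # paper's bilateral-filter + auto-regression internal model.
    h, w = len(img), len(img[0])
    neigh = [img[y + dy][x + dx]
             for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
             if 0 <= y + dy < h and 0 <= x + dx < w]
    return sum(neigh) / len(neigh)

def free_energy_surprise(img):
    # "Surprise" = mean absolute difference between the input signal and the
    # internal-model prediction, used here as a quality proxy.
    h, w = len(img), len(img[0])
    total = sum(abs(img[y][x] - ar_predict(img, y, x))
                for y in range(h) for x in range(w))
    return total / (h * w)

# Illustrative inputs: a flat patch versus a noisy checkerboard-like patch.
smooth = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
noisy = [[10, 60, 10], [60, 10, 60], [10, 60, 10]]
```

A flat patch is perfectly predictable (zero surprise), while noise inflates the residual, matching the intuition that surprise tracks degradation.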
Title: Automatic heart and lung sounds classification using convolutional neural networks
Authors: Qiyu Chen, Weibin Zhang, Xiang Tian, Xiaoxue Zhang, Shaoqiong Chen, Wenkang Lei
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820741
Abstract: In this paper, we study the effectiveness of using convolutional neural networks (CNNs) to automatically detect abnormal heart and lung sounds and classify them into different classes. Heart and respiratory diseases have affected humankind for a long time, and an effective, automatic diagnostic method is highly attractive since it can help discover potential threats at an early stage, even at home without a professional doctor. We collected a data set containing normal and abnormal heart and lung sounds, which were then annotated by professional doctors. CNN-based systems were implemented to automatically classify the heart sounds into one of seven categories (normal, bruit de galop, mitral inadequacy, mitral stenosis, interventricular septal defect (IVSD), aortic incompetence, and aortic stenosis) and the lung sounds into one of three categories (normal, moist rales, and wheezing rales).
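A minimal, framework-free sketch of the CNN building blocks such a classifier stacks (1-D convolution, ReLU, global max pooling, argmax over per-class scores). The kernels and labels below are illustrative assumptions, not the paper's trained model:

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D cross-correlation, as convolution layers compute it.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def classify(signal, kernels, labels):
    # One pooled activation per kernel (global max pooling); largest wins.
    feats = [max(relu(conv1d(signal, k))) for k in kernels]
    return labels[feats.index(max(feats))]

# Hypothetical two-class toy: edge-detector kernels standing in for learned filters.
kernels = [[-1, 1], [1, -1]]   # rising-edge detector, falling-edge detector
labels = ["rising", "falling"]
```

In the real system the kernels are learned from the annotated recordings and the final layer scores seven (heart) or three (lung) classes rather than two.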
Title: Gaze estimation using 3-D eyeball model and eyelid shapes
Authors: S. Han, Insung Hwang, Sang Hwa Lee, N. Cho
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820784
Abstract: This paper proposes a gaze estimation algorithm using a 3-D eyeball model and eyelid shapes. Gaze estimation suffers from differences in eye shapes and individual behaviors, and usually requires user-specific gaze calibration. The proposed method exploits a generic 3-D eyeball model and the shapes of the eyelids to estimate gaze without user-specific calibration or learning. Since gaze is closely related to the 3-D rotation of the eyeball, we first derive the relation between the 2-D pupil location extracted from the eye image and the 3-D rotation of the eyeball. Based on the observation that the eyelids deform with respect to gaze, we also model the curvature of the eyelid curve to compensate the estimated gaze. In various experiments, the proposed method shows good gaze estimation results. Because general 3-D eyeball and eyelid models are applied in the localized eye region, no user-specific calibration or gaze learning is needed. The proposed algorithm is therefore expected to be suitable for various applications such as VR/AR devices, driver gaze tracking, and gaze-based interfaces.
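The 2-D-pupil-to-3-D-rotation relation can be illustrated with a spherical-eyeball toy model: if the pupil centre is displaced by dx pixels on an eyeball of projected radius r, the yaw is roughly asin(dx/r). This is an assumed simplification, not the paper's full derivation, and it omits the eyelid-curvature compensation:

```python
import math

def gaze_angles(pupil_x, pupil_y, center_x, center_y, eyeball_radius_px):
    # Horizontal (yaw) and vertical (pitch) gaze angles in degrees from the
    # 2-D pupil displacement, assuming the pupil moves on a sphere whose
    # projection in the image has the given radius in pixels.
    dx = (pupil_x - center_x) / eyeball_radius_px
    dy = (pupil_y - center_y) / eyeball_radius_px
    yaw = math.degrees(math.asin(max(-1.0, min(1.0, dx))))
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, dy))))
    return yaw, pitch
```

A pupil centred on the eyeball gives (0, 0); a displacement of half the radius gives a 30-degree rotation, which is the small-angle behaviour any such eyeball model shares.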
Title: Vehicle verification in two nonoverlapped views using sparse representation
Authors: Shih-Chung Hsu, Chung-Lin Huang
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820685
Abstract: Vehicle verification across two different views can be applied in intelligent transportation systems. However, matching object appearance between two different views is difficult. The vehicle images captured in the two views are represented as a feature pair, which is classified as a same or different pair. Sparse representation (SR) has been applied to reconstruction, recognition, and verification; however, the SR dictionary may not guarantee feature sparsity and effective representation. In this paper, we propose a Boost-KSVD method that generates the SR dictionary without initial random atoms and can be applied to object verification with very good accuracy. We then develop a discriminative criterion to decide the SR dictionary size. Finally, experiments show that our method achieves better verification accuracy than the other methods.
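Sparse representation over a dictionary can be illustrated with plain matching pursuit; the paper's Boost-KSVD dictionary learning is more involved, so treat this as background, with a trivial orthonormal dictionary as the assumed input:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iter=2):
    # Greedy sparse coding: repeatedly pick the (unit-norm) atom most
    # correlated with the residual and subtract its contribution, leaving a
    # sparse coefficient vector over the dictionary.
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        corrs = [dot(residual, a) for a in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(corrs[i]))
        c = corrs[best]
        coeffs[best] += c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual
```

For verification, the sparse codes of the two views' features (not shown) would be compared to decide same/different; dictionary quality determines how discriminative those codes are, which is what Boost-KSVD targets.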
Title: On the training of DNN-based average voice model for speech synthesis
Authors: Shan Yang, Zhizheng Wu, Lei Xie
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820818
Abstract: Adaptability and controllability are the major advantages of statistical parametric speech synthesis (SPSS) over unit-selection synthesis. Recently, deep neural networks (DNNs) have significantly improved the performance of SPSS. However, current studies mainly focus on training speaker-dependent DNNs, which generally requires a significant amount of data from a single speaker. In this work, we perform a systematic analysis of the training of a multi-speaker average voice model (AVM), which is the foundation of the adaptability and controllability of a DNN-based speech synthesis system. Specifically, we employ the i-vector framework to factorise the speaker-specific information, which allows a variety of speakers to share all the hidden layers; the speaker identity vector is augmented with the linguistic features in the DNN input. We systematically analyse the impact of the i-vector implementation and of speaker normalisation.
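The input augmentation step, appending the utterance-level speaker i-vector to every frame of linguistic features, is simple to sketch. The length normalisation shown is one common i-vector normalisation and an assumption here, not necessarily the variant the paper analyses:

```python
import math

def length_normalise(ivector):
    # Project the i-vector onto the unit sphere: a common normalisation
    # before feeding speaker vectors to a neural network.
    norm = math.sqrt(sum(x * x for x in ivector)) or 1.0
    return [x / norm for x in ivector]

def augment_with_ivector(linguistic_feats, ivector):
    # Multi-speaker AVM training input: per-frame linguistic features with
    # the same utterance-level speaker i-vector appended to every frame,
    # so all hidden layers can be shared across speakers.
    return [frame + ivector for frame in linguistic_feats]
```

Because the i-vector is constant across an utterance, the network sees identical speaker conditioning at every frame, and swapping the i-vector at synthesis time changes the output voice.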
Title: Tactile brain-computer interface using classification of P300 responses evoked by full body spatial vibrotactile stimuli
Authors: Takumi Kodama, S. Makino, Tomasz M. Rutkowski
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820734
Abstract: In this study we propose a novel stimulus-driven brain-computer interface (BCI) paradigm that generates control commands based on the classification of somatosensory-modality P300 responses. Six spatial vibrotactile stimulus patterns are applied to the entire back and the limbs of a user. The aim of the current project is to validate the effectiveness of the vibrotactile stimulus patterns for BCI purposes and to establish a novel tactile-modality communication link to help locked-in syndrome (LIS) patients who have lost their sight and hearing due to sensory disabilities. We define this approach as a full-body BCI (fbBCI) and conduct psychophysical stimulus evaluation and real-time EEG response classification experiments with ten healthy, able-bodied users. The grand-mean-averaged psychophysical stimulus pattern recognition accuracy was 98.18%, whereas the real-time EEG classification accuracy was 53.67%. Information transfer rate (ITR) scores of the tested users ranged from 0.042 to 4.154 bit/minute.
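ITR figures like these are typically computed with the standard Wolpaw formula, sketched below for an N-command interface; whether this paper uses exactly this formula is an assumption:

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_minute):
    # Wolpaw ITR: bits per selection for an N-class interface with the given
    # accuracy, scaled by the selection rate to bits/minute. At or below
    # chance accuracy the rate is clamped to zero.
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n)
    if p < 1.0:  # the entropy terms vanish at p = 1 (and log2(0) is undefined)
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_minute
```

With six commands (matching the six vibrotactile patterns) and perfect accuracy, one selection per minute yields log2(6) ≈ 2.58 bit/minute, which shows why modest accuracies drive the reported ITRs well below that ceiling.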
Title: High accuracy reconstruction algorithm for CS-MRI using SDMM
Authors: M. Shibata, Norihito Inamuro, Takashi Ijiri, A. Hirabayashi
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820788
Abstract: We propose a high-accuracy algorithm for compressed sensing magnetic resonance imaging (CS-MRI) using a convex optimization technique. Lustig et al. proposed a CS-MRI technique based on the minimization of a cost function defined as the sum of a data fidelity term, the l1-norm of the sparsifying transform coefficients, and a total variation (TV) term. This function is not differentiable because of both the l1-norm and the TV term. Hence, they approximated the non-differentiable terms and minimized the approximated cost function with a nonlinear conjugate gradient algorithm; the obtained solution was therefore also approximate, and of lower quality. In this paper, we propose an algorithm that obtains the exact solution based on the simultaneous direction method of multipliers (SDMM), one of the convex optimization techniques. A naive application of SDMM to CS-MRI cannot be implemented because the transformation matrix size is proportional to the square of the image size; we solve this problem using eigenvalue decompositions. Simulations on real MR images show that the proposed algorithm outperforms the conventional one regardless of compression ratio and random sensing pattern.
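The reason SDMM handles the non-differentiable l1 term exactly is that splitting methods replace gradients with proximal operators; for the l1-norm this proximal map is elementwise soft-thresholding. The sketch below shows only that building block, not the full SDMM iteration with its eigenvalue decompositions:

```python
def soft_threshold(coeffs, t):
    # Proximal operator of t * ||x||_1: shrink each coefficient toward zero
    # by t, zeroing anything smaller in magnitude. This is evaluated exactly,
    # with no smooth approximation of the l1-norm.
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]
```

Inside an SDMM/ADMM-style loop this step would be applied to the sparsifying-transform coefficients at every iteration, alternating with a data-fidelity update; TV has an analogous proximal step.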
Title: The alpha-trimming mean filter for Video stabilization
Authors: Jinju Lim, Min-Cheol Hong
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820770
Abstract: This paper proposes a video stabilization technique using undesired-motion detection and an alpha-trimming mean filter. The proposed method consists of two steps: detecting undesired motions and filtering them out. A limit on undesired motions is defined using the local motion information, and the filter's alpha parameter is controlled based on this limit, so that the regenerated video is controlled. The experimental results demonstrate the superior performance of the proposed algorithm.
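The filter itself is standard and easy to sketch: sort a window of motion samples, drop the alpha fraction from each end, and average the rest, so it interpolates between the mean (alpha = 0) and the median (alpha near 0.5). How the paper maps its undesired-motion limit to alpha is not reproduced here:

```python
def alpha_trimmed_mean(values, alpha):
    # Sort the window, discard the alpha fraction of samples from each end,
    # and average what remains. Large outliers (undesired motion spikes) are
    # trimmed away before they can pull the estimate.
    xs = sorted(values)
    k = int(alpha * len(xs))
    trimmed = xs[k:len(xs) - k]
    return sum(trimmed) / len(trimmed)
```

Applied along a camera-motion trajectory, a larger alpha suppresses jitter more aggressively at the cost of smoothing intentional motion, which is why the paper adapts it from the detected undesired-motion limit.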
Title: Background priors based saliency object detection
Authors: Zexia Liu, Guanghua Gu, Chunxia Chen, D. Cui, Chunyu Lin
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820744
Abstract: Saliency object detection is the key process of identifying the location of an object. It has been widely used in numerous applications, including object recognition, image segmentation, and video summarization. In this paper, we propose a saliency object detection approach based on background priors. First, we obtain a border set by collecting the image-border superpixels and then remove superpixels with strong image edges from the border set, reducing foreground noise and yielding true background superpixel seeds. Then, an initial saliency map is computed from a background saliency map based on these background seeds, fused with a centered anisotropic Gaussian distribution. Finally, we refine the initial saliency map via a smoothness constraint that encourages neighboring pixels in the image to take the same label. Experimental results on two large benchmark datasets demonstrate that the proposed algorithm performs favorably against six other state-of-the-art methods in terms of precision, recall, and F-measure. Our method is more effective in highlighting salient objects and reducing background noise.
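The background prior itself is easy to demonstrate: take the image border as background seeds and score every pixel by its distance to the border statistics. This toy version works on raw intensities rather than superpixels and omits the edge-based seed pruning, the Gaussian fusion, and the smoothness refinement:

```python
def background_prior_saliency(img):
    # Background prior: border pixels are assumed to be background seeds.
    # Each pixel's saliency is its absolute intensity distance to the mean
    # border value, so pixels unlike the border stand out.
    h, w = len(img), len(img[0])
    border = [img[y][x] for y in range(h) for x in range(w)
              if y in (0, h - 1) or x in (0, w - 1)]
    bg = sum(border) / len(border)
    return [[abs(img[y][x] - bg) for x in range(w)] for y in range(h)]
```

A bright object on a dark border scores high in the interior and zero on the border, which is exactly the behaviour the seed-pruning step protects when a foreground object touches the image edge.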
Title: Octagonal formation of ultrasonic array sensors for fall detection
Authors: Chokemongkol Nadee, K. Chamnongthai
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820819
Abstract: Elderly people and patients with impaired control of body movement need an automatic alarm, with privacy protection, when they fall. This paper proposes an octagonal formation of ultrasonic array sensors for a fall detection system in a smart room. In the method, the octagonal array of ultrasonic sensors is installed on the walls and ceiling, and the ultrasonic signals reflected from objects in the room are processed to recognize a fall. In experiments, the proposed method achieved 94% accuracy in fall detection.
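A ceiling-mounted range sensor reduces fall detection to watching the tracked height drop quickly: height is the ceiling height minus the measured range. The thresholds and window below are illustrative assumptions, not the paper's detection rule, and a real system would fuse the full octagonal array:

```python
def detect_fall(distances, ceiling_height, head_drop=1.0, window=3):
    # Convert ceiling-sensor ranges (metres) to object heights, then flag a
    # fall when the height drops by more than `head_drop` metres within
    # `window` consecutive readings.
    heights = [ceiling_height - d for d in distances]
    for i in range(len(heights) - window + 1):
        if heights[i] - min(heights[i:i + window]) > head_drop:
            return True
    return False
```

Because only ranges are sensed, no camera image of the occupant ever exists, which is the privacy advantage the abstract highlights over vision-based fall detectors.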