Pub Date: 2015-03-31 | DOI: 10.6109/jicce.2015.13.1.001
Ye Shan, Ming Li, Minglu Jin
In this study, we consider visible light communication in an indoor line-of-sight environment. It has been shown that, among multiple-input multiple-output (MIMO) techniques, spatial modulation (SM) performs better than repetition coding (RC) and spatial multiplexing (SMP). On the basis of a combination of SM and pulse amplitude modulation (PAM), we propose an enhanced SM algorithm to improve the bit error rate. Traditional SM activates only one light-emitting diode (LED) at a time, whereas the proposed enhanced SM activates two LEDs at a time and halves the number of PAM intensity levels. Under highly correlated channel conditions, power imbalance is used to improve the algorithm's performance. The two schemes are compared at the same signal-to-noise ratio. The simulation results show that the enhanced SM outperforms traditional SM in both highly and weakly correlated channels. Furthermore, the proposed enhanced SM scheme can increase the transmission rate in most cases.
Title: Enhanced Spatial Modulation of Indoor Visible Light Communication (J. Inform. and Commun. Convergence Engineering)
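The LED-activation idea in the abstract can be sketched as a bit-to-symbol mapping. This is an illustrative sketch, not the authors' exact mapping: the 4-LED array size, the choice of which four LED pairs to use, and the PAM level assignment are all assumptions.

```python
from itertools import combinations

def traditional_sm(bits, n_led=4):
    """Traditional SM: log2(n_led) bits pick the single active LED,
    the remaining two bits pick one of four PAM intensity levels (1..4)."""
    led = int(bits[:2], 2)
    level = int(bits[2:], 2) + 1
    tx = [0] * n_led
    tx[led] = level
    return tx

def enhanced_sm(bits, n_led=4):
    """Enhanced SM sketch: 2 bits pick one of four LED pairs, 1 bit picks
    a 2-PAM level, so two LEDs are lit and the number of PAM levels is
    halved (which four pairs to keep is an assumption)."""
    pairs = list(combinations(range(n_led), 2))[:4]
    i, j = pairs[int(bits[:2], 2)]
    level = int(bits[2:], 2) + 1
    tx = [0] * n_led
    tx[i] = tx[j] = level
    return tx
```

Note that traditional SM carries 4 bits per symbol here while the enhanced variant carries 3 with half the PAM levels; the rate comparison in the paper depends on the LED count and constellation sizes.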
Pub Date: 2015-03-31 | DOI: 10.6109/jicce.2015.13.1.050
K. Ryoo, Jeong‐Bong Lee
We report the design and simulation of a two-dimensional (2D) silicon-based nanophotonic crystal used as an optical insulator to enhance the light emission efficiency of light-emitting diodes (LEDs). The device is designed so that a triangular-array silicon photonic crystal light insulator has a square trench in the middle where an LED can be placed. By varying the normalized radius in the range of 0.3 to 0.5 using the plane wave expansion method (PWEM), we found that a normalized radius of 0.45 creates a large band gap for transverse electric (TE) polarization. Subsequently, a series of light propagation simulations were carried out using 2D and three-dimensional (3D) finite-difference time-domain (FDTD) methods. The designed silicon-based light insulator exhibits a region in which light propagation is forbidden in the horizontal plane for TE-polarized light over most of the visible spectrum, in the wavelength range of 450 nm to 600 nm.
Title: Visible Wavelength Photonic Insulator for Enhancing LED Light Emission
Pub Date: 2015-03-31 | DOI: 10.6109/jicce.2015.13.1.007
Tae-Hoon Kim, D. Tipper, P. Krishnamurthy
In a multi-hop wireless network, connectivity is determined by whether a link can be established, which depends on the received signal strength computed by subtracting the path loss from the transmission power. Two path loss models are commonly used in research, namely two-ray ground and shadow fading; they determine the received signal strength and thus affect the link quality. Link quality is one of the key factors affecting network performance; in general, performance improves with better link quality. In this study, we measure network connectivity and performance under a shadow fading path loss model, and we observe that both are severely degraded under this model. To improve network performance, we propose power control schemes that use link quality to identify the set of nodes that must adjust their transmission power in order to improve the network throughput in both homogeneous and heterogeneous multi-hop wireless networks. Numerical studies evaluating the proposed schemes are presented and compared.
Title: Improving the Performance of Multi-Hop Wireless Networks by Selective Transmission Power Control
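The link rule described in the abstract (received signal strength equals transmission power minus path loss, with log-normal shadow fading) can be sketched as follows. The path loss exponent, reference loss, shadowing deviation, and receiver sensitivity are illustrative numbers, not values from the paper.

```python
import math
import random

def received_power_dbm(tx_dbm, d, pl_exp=3.0, pl0=40.0, d0=1.0,
                       sigma=4.0, rng=random):
    """Log-distance path loss with log-normal shadowing (dB domain).
    Received power = transmit power - path loss; all constants illustrative."""
    path_loss = pl0 + 10 * pl_exp * math.log10(d / d0) + rng.gauss(0.0, sigma)
    return tx_dbm - path_loss

def link_up(tx_dbm, d, sensitivity_dbm=-85.0, **kw):
    """A link exists when the received signal strength meets the
    receiver sensitivity (an assumed -85 dBm threshold)."""
    return received_power_dbm(tx_dbm, d, **kw) >= sensitivity_dbm
```

Setting `sigma=0` reduces the model to deterministic log-distance loss, which makes the connectivity boundary easy to check; with shadowing enabled, links near the boundary become probabilistic, which is the degradation the paper measures.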
Pub Date: 2015-03-31 | DOI: 10.6109/jicce.2015.13.1.042
Zhihua Wang, Yongri Piao, Minglu Jin
In laser pointer interaction systems, laser spot detection is one of the most important technologies; most of the challenges in this area are related to varying backgrounds and to the real-time performance of the interaction system. In this paper, we present a robust dictionary construction and update algorithm based on a sparse model of background subtraction. To handle dynamic backgrounds, we first determine whether the background has changed; if so, the new background is added directly to the dictionary; otherwise, we run an online cumulative average on the backgrounds to update the dictionary. The proposed dictionary construction and update algorithm for laser spot detection is robust to varying backgrounds and noise and can be implemented in real time. Extensive experimental results confirm the superior performance of the proposed method in terms of detection error and real-time implementation.
Title: Laser Spot Detection Using Robust Dictionary Construction and Update
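The update rule in the abstract (add a new background atom on change, otherwise refine via an online cumulative average) can be sketched as follows. The mean-absolute-difference change test and its threshold are assumptions, since the abstract does not state how a background change is detected.

```python
import numpy as np

def update_dictionary(dictionary, counts, frame, change_thresh=25.0):
    """If the new frame differs enough from every stored background atom,
    add it as a new atom; otherwise refine the closest atom with an online
    cumulative (running) average. Threshold value is illustrative."""
    frame = frame.astype(float)
    if not dictionary:
        return [frame], [1]
    dists = [np.abs(frame - atom).mean() for atom in dictionary]
    i = int(np.argmin(dists))
    if dists[i] > change_thresh:                 # background changed: new atom
        dictionary.append(frame)
        counts.append(1)
    else:                                        # same background: running mean
        counts[i] += 1
        dictionary[i] += (frame - dictionary[i]) / counts[i]
    return dictionary, counts
```

The running-mean form `atom += (frame - atom) / n` is the incremental version of the cumulative average, so no frame history needs to be stored, which is what makes real-time operation plausible.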
Pub Date: 2015-03-31 | DOI: 10.6109/jicce.2015.13.1.027
Hwajeong Seo, Howon Kim
Multiprecision multiplication is the most expensive operation in public-key cryptography. Therefore, many multiplication methods have been studied intensively for several decades. At the Workshop on Cryptographic Hardware and Embedded Systems 2011 (CHES 2011), a novel multiplication method called 'operand caching' was proposed. This method reduces the number of required load instructions by caching the operands. However, it does not provide full operand caching when changing the row of partial products. To overcome this problem, a method called 'consecutive operand caching' was proposed at the Workshop on Information Security Applications 2012 (WISA 2012). It divides the multiplication structure into partial products and reconstructs them so that common operands are shared between consecutive partial products. However, there is still room for improvement; therefore, we propose a finely designed operand-caching mode that minimizes useless memory accesses when the first row is changed. As a result, we reduce the number of memory access instructions and boost the speed of the overall multiprecision multiplication for public-key cryptography.
Title: Consecutive Operand-Caching Method for Multiprecision Multiplication, Revisited
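For context, the baseline that operand-caching methods optimize is plain schoolbook multiprecision multiplication on machine-word limbs. The sketch below shows that baseline with 8-bit limbs for readability; the caching idea itself (reordering partial products so operand limbs already held in registers are reused) is noted in a comment rather than implemented.

```python
def mp_mul(a, b, w=8):
    """Schoolbook (operand-scanning) multiprecision multiplication on
    w-bit limbs, least-significant limb first. Each a[i]*b[j] partial
    product would cost two operand loads on a register-starved MCU;
    operand-caching schedules reorder these loops to reuse loaded limbs."""
    mask = (1 << w) - 1
    r = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = r[i + j] + ai * bj + carry
            r[i + j] = t & mask
            carry = t >> w
        r[i + len(b)] += carry
    return r

def limbs(n, k, w=8):
    """Split integer n into k w-bit limbs."""
    return [(n >> (w * i)) & ((1 << w) - 1) for i in range(k)]

def from_limbs(ls, w=8):
    """Recombine limbs into an integer."""
    return sum(l << (w * i) for i, l in enumerate(ls))
```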
Pub Date: 2014-12-31 | DOI: 10.6109/jicce.2014.12.4.257
Soo-Tai Nam, Chan-yong Jin, Jae-Yeon Sim
Meta-analysis is a statistical integration method that provides an opportunity to survey an entire body of results by integrating and analyzing many quantitative research findings. On the basis of a meta-analysis, this study identifies meaningful mediator variables for criterion variables that affect purchase and repurchase intentions in e-commerce. We reviewed a total of 114 e-commerce studies published in Korean journals between 2000 and 2014 in which a cause-and-effect relationship is established between variables specified in the conceptual model of this study. In this meta-analysis, the path between trust and purchase intention showed the biggest effect size. The second biggest effect size was found in the path between commitment and purchase intention, while the smallest was obtained with perceived. Thus, we present the theoretical and practical implications of these results and discuss the differences among them through a comparative analysis with previous studies.
Title: A Meta-analysis of the Relationship between Mediator Factors and Purchasing Intention in E-commerce Studies
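A meta-analytic pooling step of the kind such studies rely on can be sketched as follows. Fisher's z transform with n − 3 weights is a standard textbook scheme for pooling correlation effect sizes; whether this paper used exactly this weighting is an assumption, since the abstract does not say.

```python
import math

def fixed_effect_mean(effects, ns):
    """Fixed-effect meta-analytic mean of correlation effect sizes.
    Each correlation r is mapped to Fisher's z, weighted by n - 3
    (the inverse of z's sampling variance), averaged, and mapped back."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in effects]
    ws = [n - 3 for n in ns]
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(zbar)   # back-transform z to a correlation
```

The pooled value always lies between the smallest and largest study-level correlations, pulled toward the larger studies.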
Pub Date: 2014-12-31 | DOI: 10.6109/jicce.2014.12.4.215
Y. Kim
We consider quantization optimized for distributed estimation, where a set of sensors at different sites collect measurements of a parameter of interest, quantize them, and transmit the quantized data to a fusion node, which then estimates the parameter. We propose an iterative quantizer design algorithm with a weighted distance rule that reduces a system-wide metric, such as the estimation error, by constructing quantization partitions together with their optimal weights. We show that the search for the weights, the most computationally expensive step in the algorithm, can be conducted sequentially without compromising convergence, leading to a significant reduction in design complexity. Our experiments demonstrate that the proposed algorithm achieves improved performance over traditional quantizer designs. The benefit of the proposed technique is further illustrated by experiments showing estimation performance similar to that of recently published algorithms at much lower complexity.
Title: Weighted Distance-Based Quantization for Distributed Estimation
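The iterative design loop the abstract describes generalizes the classic Lloyd algorithm. The sketch below shows the plain, unweighted Lloyd iteration for a scalar quantizer; the paper's weighted distance rule and sequential weight search extend the assignment step, and are not implemented here.

```python
def lloyd_quantizer(samples, levels, iters=50):
    """Plain Lloyd iteration for a scalar quantizer: assign each sample
    to the nearest codeword (squared-error distance), then move each
    codeword to the mean of its cell. Codewords start evenly spaced."""
    lo, hi = min(samples), max(samples)
    code = [lo + (k + 0.5) * (hi - lo) / levels for k in range(levels)]
    for _ in range(iters):
        cells = [[] for _ in code]
        for x in samples:
            i = min(range(levels), key=lambda k: (x - code[k]) ** 2)
            cells[i].append(x)
        # empty cells keep their old codeword
        code = [sum(c) / len(c) if c else code[i] for i, c in enumerate(cells)]
    return sorted(code)
```

Replacing the squared-error distance in the assignment step with a weighted distance, and optimizing those weights, is where the paper's system-wide estimation metric enters the design.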
Pub Date: 2014-12-31 | DOI: 10.6109/jicce.2014.12.4.228
S. S. Husain, A. Prasad, A. Kunz, Apostolos Papageorgiou, Jaeseung Song
One of the major purposes of these standard technologies is to ensure interoperability between entities from different vendors and to enable interworking between various technologies. Because interoperability and interworking are essential for machine-to-machine (M2M) communications and the Internet of Things (IoT) to achieve their ultimate goal, namely, connecting things to each other, multiple standards organizations are now working on the development of M2M/IoT-related specifications. This paper reviews the current activities of some of the most relevant standardization bodies in the area of M2M and IoT: the core and radio network aspects of the Third Generation Partnership Project (3GPP), the Broadband Forum, and oneM2M. The major features and issues on which these standards bodies focus are summarized. Finally, some key trends common to the different bodies are identified: a common service layer platform, new technologies mitigating the explosive growth of network traffic, and considerations and efforts related to the development of device management technologies.
Title: Recent Trends in Standards Related to the Internet of Things and Machine-to-Machine Communications
Pub Date: 2014-12-31 | DOI: 10.6109/JICCE.2014.12.4.221
Junhui Zhao, Rong Ran, Chang-Heon Oh, Jeong-Wook Seo
In this paper, we analyze the effect of the coherence bandwidth of wireless channels on leakage suppression methods for discrete Fourier transform (DFT)-based channel estimation in orthogonal frequency division multiplexing (OFDM) systems. Virtual carriers in an OFDM symbol cause a loss of orthogonality in DFT-based channel estimation, which is referred to as the leakage problem. Optimal and suboptimal methods have been proposed to solve it. However, according to our analysis, the performance of these methods depends strongly on the coherence bandwidth of the wireless channel. If some of the estimated channel frequency responses fall outside the coherence bandwidth, a channel estimation error occurs and the overall performance degrades despite a high signal-to-noise ratio.
Title: Analysis of the Effect of Coherence Bandwidth on Leakage Suppression Methods for OFDM Channel Estimation
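The leakage mechanism can be demonstrated with a minimal DFT-based estimator: place least-squares (LS) channel estimates on their carriers, transform to the time domain, keep only the leading taps, and transform back. This is a generic sketch of DFT-based estimation (known tap count, noiseless LS values assumed), not the specific optimal or suboptimal methods the paper analyzes.

```python
import numpy as np

def dft_channel_estimate(h_ls_used, used_idx, n_fft, n_taps):
    """DFT-based smoothing of an LS channel estimate. With a full band
    this is exact for an n_taps channel; with virtual (unused) carriers
    left at zero, the IDFT loses orthogonality to the true taps, energy
    leaks beyond n_taps, and the truncation introduces an error."""
    H = np.zeros(n_fft, dtype=complex)
    H[used_idx] = h_ls_used       # virtual carriers stay zero
    h_time = np.fft.ifft(H)
    h_time[n_taps:] = 0.0         # keep only the assumed channel length
    return np.fft.fft(h_time)
```

Running this with and without virtual carriers on the same two-tap channel shows the effect directly: the full-band estimate is exact, while zeroing even two edge carriers leaves a residual error on every used carrier regardless of SNR.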
Pub Date: 2014-12-31 | DOI: 10.6109/jicce.2014.12.4.263
A. Mohapatra, Sunita Sarangi, S. Patnaik, S. Sabut
Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are too computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognizing an object. In this paper, we analyze object detection algorithms with respect to efficiency, quality, and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared with the conventional SIFT algorithm, an object recognition system based on the FAST corner detector achieves increased speed with little performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2,169 keypoints. Similarly, the average time to find corner points with the FAST method at a threshold of 30 is 0.651 seconds for detecting 1,714 keypoints. Thus, the FAST method detects corner points faster and with better-quality images for object recognition.
Title: Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing
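The segment test that FAST is built on can be sketched in a few lines: a pixel is a corner if enough contiguous pixels on a circle around it are all brighter or all darker than the center by a threshold. This is a simplified illustration of the idea, not OpenCV's implementation; the 16-pixel circle is standard, but the default threshold and run length here are illustrative.

```python
import numpy as np

def fast_like_corner(img, y, x, t=30, n_required=9):
    """Simplified FAST segment test: corner if n_required contiguous
    pixels on a radius-3 Bresenham circle are all brighter than p + t
    or all darker than p - t (FAST-9 style run length)."""
    circle = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2),
              (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0),
              (-3, 1), (-2, 2), (-1, 3)]
    p = int(img[y, x])
    vals = [int(img[y + dy, x + dx]) for dx, dy in circle]
    for sign in (1, -1):                      # brighter arc, then darker arc
        flags = [sign * (v - p) > t for v in vals]
        doubled = flags + flags               # wrap around for contiguity
        run = best = 0
        for f in doubled:
            run = run + 1 if f else 0
            best = max(best, min(run, len(flags)))
        if best >= n_required:
            return True
    return False
```

Because the test reads only 16 pixels and exits early on an accept, it is far cheaper per pixel than SIFT's scale-space difference-of-Gaussians search, which is the speed gap the paper measures.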