
Latest publications: 2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering

Clustering centroid finding algorithm (CCFA) using spatial temporal data mining concept
S. Baboo, K. Tajudin
This research focuses on finding the clustering centroid value for spatio-temporal data mining, using k-means, an enhanced k-means algorithm, and Average Centroid (AC) clustering. The paper works with real maximum-wind data for Indian Ocean hurricanes from 2001 to 2010. Clustering is performed with a selection-window method: the first window is formed from the pixel coordinate values of the screen, and the second clustering window from one half of the centre-point value. The data mining step retrieves clustering data on the basis of the selection window. The basic k-means algorithm needs few steps, with the same iteration repeated until the centroid point stabilizes. The enhanced k-means algorithm takes more steps, but its result is accurate at the final stage and it repeats the iteration very few times. Finally, the paper computes an average centroid over all previously selected values together with the currently selected clustering data, and presents a comparative study of the k-means, enhanced k-means, and AC clustering values.
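The basic k-means update that the paper compares against can be sketched in a few lines. The points below are made-up 2-D stand-ins, not the paper's hurricane wind data, and the iteration cap and seed are arbitrary choices.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        new = []
        for i, cl in enumerate(clusters):
            if cl:  # move the centroid to the mean of its cluster
                new.append((sum(p[0] for p in cl) / len(cl),
                            sum(p[1] for p in cl) / len(cl)))
            else:   # keep an empty cluster's centroid where it was
                new.append(centroids[i])
        if new == centroids:  # unchanged centroids: converged
            break
        centroids = new
    return centroids

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
print(sorted(kmeans(pts, 2)))
```

The "same iteration continues until the centroid point stabilizes" behaviour from the abstract is the `new == centroids` convergence check.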
DOI: 10.1109/ICPRIME.2013.6496443
Citations: 6
A performance analysis and comparison of various routing protocols in MANET
M. Shobana, S. Karthik
Mobile ad hoc networks (MANETs) are characterized by wireless connectivity, continuously changing topology, distributed operation, and ease of deployment. Data is transmitted from a source node to a destination through multiple intermediate nodes, i.e., in a multi-hop fashion. Each node has a particular range within which transmission takes place. As a packet moves from one range to another in the network, link failure and the dynamically changing topology can cause packet loss. Many traditional routing protocols try to prevent this data loss, but all of them are susceptible to node mobility. Here, the traditional protocols are compared with geographic routing protocols in terms of packet delivery ratio and transmission delay.
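The two comparison metrics named in the abstract are straightforward to compute from a simulation trace. The protocol names and trace numbers below are hypothetical placeholders, not results from the paper.

```python
def packet_delivery_ratio(sent, received):
    """Fraction of data packets that reached their destinations."""
    return received / sent if sent else 0.0

def average_delay(delays):
    """Mean end-to-end delay (seconds) over successfully delivered packets."""
    return sum(delays) / len(delays) if delays else 0.0

# Hypothetical trace numbers for two protocols (illustration only).
traces = {
    "traditional": {"sent": 1000, "received": 923, "delays": [0.12, 0.15, 0.11]},
    "geographic":  {"sent": 1000, "received": 958, "delays": [0.09, 0.10, 0.08]},
}
for name, t in traces.items():
    print(name,
          packet_delivery_ratio(t["sent"], t["received"]),
          round(average_delay(t["delays"]), 4))
```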
DOI: 10.1109/ICPRIME.2013.6496508
Citations: 19
Recognition of Arabic numerals with grouping and ungrouping using back propagation neural network
P. Selvi, Selvikrish. selvi
In this paper, the authors propose a method to recognize Arabic numerals using a back-propagation neural network (BPNN). Arabic numerals are the ten digits descended from the Indian numeral system; although the 0-9 pattern is the same as in the Indian system, the glyphs differ for each numeral. The proposed method comprises preprocessing of the digitized handwritten image, BPNN training, and a recognition phase. First, the number of digits to be recognized is selected. The selected numerals are preprocessed for noise removal and binarization, a separation step isolates the individual numerals, and labelling, segmentation, and normalization are performed on each separated numeral. The recognition phase then identifies the numerals accurately. The method is implemented in Matlab, and sample handwritten images are tested with the results plotted. With this method, the training performance rate was 99.4%. The accuracy value is computed for each node in the network from receiver operating characteristics and the confusion matrix. The final result shows that the proposed method achieves a recognition accuracy of more than 96%.
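A back-propagation network of the kind described can be sketched in plain Python. The toy 2x2 "glyphs" below stand in for the preprocessed, binarized numeral images, and the layer sizes, learning rate, and epoch count are arbitrary illustrative choices, not the paper's.

```python
import math
import random

def train_bpnn(samples, n_in, n_hidden, n_out, lr=0.5, epochs=2000, seed=1):
    """Tiny one-hidden-layer network trained with plain backpropagation."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    for _ in range(epochs):
        for x, t in samples:
            # Forward pass through hidden and output layers.
            h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
            y = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w2]
            # Backpropagate: output deltas, then hidden deltas.
            dy = [(ti - yi) * yi * (1 - yi) for yi, ti in zip(y, t)]
            dh = [hi * (1 - hi) * sum(dy[k] * w2[k][j] for k in range(n_out))
                  for j, hi in enumerate(h)]
            for k in range(n_out):
                for j in range(n_hidden):
                    w2[k][j] += lr * dy[k] * h[j]
            for j in range(n_hidden):
                for i in range(n_in):
                    w1[j][i] += lr * dh[j] * x[i]
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]
        y = [sig(sum(w * hi for w, hi in zip(row, h))) for row in w2]
        return y.index(max(y))  # winning output node is the class
    return predict

# Two toy binary "glyphs" standing in for preprocessed numeral images.
data = [((1, 0, 0, 1), (1, 0)), ((0, 1, 1, 0), (0, 1))]
predict = train_bpnn(data, 4, 3, 2)
print([predict(x) for x, _ in data])
```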
DOI: 10.1109/ICPRIME.2013.6496494
Citations: 14
Binary plane technique for super resolution image reconstruction using integer wavelet transform
P. Babu, K. Prasad
Super-resolution (SR) image reconstruction is the process of producing a high-resolution (HR) image from many low-resolution (LR) images, and can be considered a second-generation restoration technique. In this paper we propose SR image reconstruction from clean, noisy, and blurred images using binary plane technique (BPT) encoding and the integer wavelet transform (IWT). The integer wavelet transform maps an integer data set into another integer data set. Objective and subjective analysis shows that the reconstructed image has a better super-resolution factor and higher quality metrics.
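The integer-to-integer mapping the abstract mentions can be illustrated with the simplest integer wavelet, the Haar (S) transform realized via lifting. This is a generic sketch of the IWT idea, not the paper's specific filter bank.

```python
def iwt_haar(x):
    """One level of the integer Haar (S) transform on an even-length list."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]  # floor average
    d = [a - b for a, b in zip(x[::2], x[1::2])]          # difference
    return s, d

def iiwt_haar(s, d):
    """Exact integer inverse: recover the original samples losslessly."""
    x = []
    for si, di in zip(s, d):
        a = si + ((di + 1) >> 1)  # a = s + ceil(d/2), integer arithmetic only
        b = a - di
        x += [a, b]
    return x

x = [5, 2, 9, 9, 3, 4, 7, 0]
s, d = iwt_haar(x)
print(s, d)
print(iiwt_haar(s, d) == x)
```

Because every intermediate value stays an integer, the forward/inverse pair is lossless, which is exactly the property that makes IWT attractive for reconstruction schemes.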
DOI: 10.1109/ICPRIME.2013.6496479
Citations: 1
Outdoor scene image segmentation using statistical region merging
A. N. Kumar, C. Jothilakshmi, M. Ilamathi, S. Kalaiselvi
A new approach to outdoor scene image segmentation is based on region merging. The goal is to identify both structured objects (e.g. buildings, persons, cars) and unstructured background objects (sky, road, grass), which share characteristics of colour, intensity, and texture. The main aim is to resolve over-segmented objects and strong reflections from objects; these problems are solved with the Statistical Region Merging (SRM) algorithm. In preprocessing, the input image is converted into the CIE (Commission Internationale de l'Eclairage) colour space. A bottom-up segmentation process then captures the structured and unstructured image characteristics, and an AdaBoost classifier, which focuses on difficult patterns, classifies the background objects in outdoor environment scenes. Contour maps are used to detect the boundary energy; the boundary-detection test groups objects from pairs of connected neighbouring regions. Experimental results on two databases (the Gould data set and the Berkeley segmentation data set) show accurate segmentation using region merging. Finally, statistical region merging provides the groupings of images for computer-vision identification.
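The heart of SRM is a statistical merging predicate that compares two regions' mean intensities against a deviation bound that shrinks as regions grow. The sketch below uses a simplified form of that bound; the parameters g, Q, and delta are illustrative defaults, not values from the paper.

```python
import math

def b_squared(n, g=256.0, Q=32.0, delta=1e-3):
    """Simplified SRM deviation bound b(R)^2 for a region of n pixels.

    Larger regions get a tighter bound, so big regions only merge when
    their means are very close; Q controls overall merging aggressiveness.
    """
    return (g * g / (2.0 * Q * n)) * math.log(2.0 / delta)

def should_merge(mean1, n1, mean2, n2, **kw):
    """Merge two regions when their mean intensities are statistically close."""
    return (mean1 - mean2) ** 2 <= b_squared(n1, **kw) + b_squared(n2, **kw)

print(should_merge(120.0, 400, 124.0, 350))  # similar regions
print(should_merge(120.0, 400, 200.0, 350))  # clearly different regions
```

In a full SRM pass, all 4-connected pixel pairs are sorted by gradient and this predicate is evaluated on their current regions in that order.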
DOI: 10.1109/ICPRIME.2013.6496499
Citations: 0
Packet size based performance analysis of IEEE 802.11 WLAN comprising virtual server arrays
Dr. V. Karthikeyani, Mr. T. Thiruvenkadam
The current utilization of the radio spectrum is quite inefficient; if properly used, there is no shortage of the spectrum presently available. More flexible use of spectrum and spectrum sharing between radio systems are therefore expected to be key enablers for the successful implementation of future systems, and cognitive radio is regarded as the most intelligent and promising technique for solving the spectrum-sharing problem. In this paper, we consider a technique for spectrum sharing among the users of service providers, allowing them to share the licensed spectrum of licensed service providers. The proposed technique is shown to reduce the call blocking rate and improve spectrum utilization.
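The claim about reducing the call blocking rate can be made concrete with the classic Erlang-B formula, the standard model for blocking in a system with a fixed number of channels. This is a generic illustration of why sharing extra licensed channels lowers blocking, not the paper's own analysis; the traffic load and channel counts are assumed.

```python
def erlang_b(traffic_erlangs, channels):
    """Erlang-B blocking probability via the numerically stable recurrence."""
    b = 1.0
    for m in range(1, channels + 1):
        b = traffic_erlangs * b / (m + traffic_erlangs * b)
    return b

# Same 8-Erlang offered load: more shared channels means fewer blocked calls.
print(erlang_b(8.0, 10))
print(erlang_b(8.0, 14))
```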
DOI: 10.1109/ICPRIME.2013.6496445
Citations: 3
A robust QR-Code video watermarking scheme based on SVD and DWT composite domain
G. Prabakaran, R. Bhavani, M. Ramesh
Nowadays, digital video is one of the most popular forms of multimedia data exchanged on the internet, and commercial activity on the internet and in the media requires protection to enhance security. The 2D barcode with a digital watermark is an area of wide research interest in the security field. This paper proposes video watermarking of text data (a verification message) using the Quick Response (QR) code technique. The QR code is watermarked via a robust video watermarking scheme based on singular value decomposition (SVD) and the discrete wavelet transform (DWT); in addition, a logo (or watermark) asserts authorized ownership of the video document. SVD, an attractive algebraic transform for watermarking applications, is applied to the cover I-frame, and the extracted diagonal values are fused with the logo/watermark. The DWT is applied to the SVD cover image and the QR code image. After the inverse transform, the watermarked frame (including the logo and QR code image) is added back into the video, and the video file is sent to authorized customers. In the reverse process, the logo and QR code are checked to verify authorized ownership. Experimental results achieve acceptable imperceptibility and a degree of robustness in video processing.
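The additive transform-domain embedding idea behind such schemes can be sketched for a 1-D signal with a single-level Haar DWT. The SVD step and the video/I-frame handling are omitted here; the embedding strength alpha and the host samples are illustrative, and extraction is non-blind (it needs the original).

```python
def haar_forward(x):
    """Single-level Haar DWT: approximation (a) and detail (d) bands."""
    a = [(p + q) / 2 for p, q in zip(x[::2], x[1::2])]
    d = [(p - q) / 2 for p, q in zip(x[::2], x[1::2])]
    return a, d

def haar_inverse(a, d):
    """Exact inverse of the single-level Haar DWT."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def embed(signal, bits, alpha=2.0):
    """Additively embed watermark bits into the approximation band."""
    a, d = haar_forward(signal)
    a = [ai + alpha * b for ai, b in zip(a, bits)]
    return haar_inverse(a, d)

def extract(marked, original, alpha=2.0):
    """Non-blind extraction: compare marked vs. original approximation bands."""
    a_m, _ = haar_forward(marked)
    a_o, _ = haar_forward(original)
    return [round((m - o) / alpha) for m, o in zip(a_m, a_o)]

host = [10.0, 12.0, 14.0, 13.0, 9.0, 8.0, 11.0, 15.0]
bits = [1, 0, 1, 1]
print(extract(embed(host, bits), host))
```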
DOI: 10.1109/ICPRIME.2013.6496482
Citations: 35
A survey on Iris Segmentation methods
S. Jayalakshmi, M. Sundaresan
In this paper, we study various well-known iris segmentation algorithms used for iris recognition. We review algorithms based on Fourier spectral density, limbic boundary localization, gradient-based edge detection and linking, Dempster-Shafer theory, and pupil detection, which support accurate and efficient iris segmentation. We also compare the results obtained from implementations of the existing algorithms on the CASIA, WVU, and UBIRIS databases, identifying which produce better segmentation with an improved accuracy rate.
DOI: 10.1109/ICPRIME.2013.6496513
Citations: 9
A predominant statistical approach to identify semantic similarity of textual documents
P. Vigneshvaran, E. Jayabalan, K. Vijaya
Semantic similarity is the process of identifying similar words; it concerns computing the similarity between documents that are not lexicographically similar. This paper proposes an empirical method to estimate semantic similarity using HBase. Specifically, it measures various word co-occurrences in the document and identifies their synonyms using WordNet, and similarity is measured with statistical approaches such as MSE and MSD. The research focuses on evaluating the similarity between a key document and the source documents in a document corpus. The resulting tool has been tested by checking the similarity of assignments submitted by students, to verify a student's integrity; it may also be used to detect plagiarism and to eliminate duplicates in a text repository.
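The abstract does not spell out how its MSE/MSD measures are applied, so as a stand-in the sketch below shows the standard co-occurrence approach the paper builds on: comparing term-frequency vectors of a key document and a source document with cosine similarity. The sample texts are invented.

```python
import math
from collections import Counter

def cosine_similarity(doc1, doc2):
    """Cosine similarity between term-frequency vectors of two texts."""
    v1 = Counter(doc1.lower().split())
    v2 = Counter(doc2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)      # shared-term co-occurrence
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

key = "students submit the assignment"
src = "the assignment the students submit"
print(round(cosine_similarity(key, src), 3))
```

A WordNet step (as in the paper) would expand each term with its synonyms before building the vectors, so paraphrased text still scores high.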
DOI: 10.1109/ICPRIME.2013.6496721
Citations: 2
Real time data acquisition system with self adaptive sampling rate using GPU
J. Thomas, C. Rajasekaran
Intelligent data acquisition with real-time data processing requires an efficient algorithm to reduce the amount of redundant data collected during the acquisition process. Changing the sampling rate in accordance with the acquired signal's bandwidth reduces the superfluous information collected. Here a self-adaptive sampling rate is used, which continuously adapts the sample rate during acquisition: data are first acquired at a fixed sample rate, and the rest of the process is driven by a bandwidth estimation algorithm, which also yields the decimation factor for the acquired signal. The system optimizes the amount of data collected while retaining the same information.
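Deriving a decimation factor from an estimated bandwidth follows directly from the Nyquist criterion: the post-decimation rate must stay above twice the signal bandwidth. This is a generic sketch; the guard margin is an assumed safety factor, not a value from the paper.

```python
def decimation_factor(fs, bandwidth, guard=1.25):
    """Largest integer decimation keeping the effective rate above Nyquist.

    fs: fixed acquisition sample rate (Hz); bandwidth: estimated signal
    bandwidth (Hz); guard: assumed margin above the bare Nyquist rate.
    """
    required = 2.0 * bandwidth * guard  # Nyquist rate plus guard margin
    return max(1, int(fs // required))

# Sampling at 1 MHz with an estimated 40 kHz bandwidth allows decimating
# by 10 while still satisfying Nyquist with margin.
print(decimation_factor(1_000_000, 40_000))
print(decimation_factor(1_000_000, 400_000))
```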
DOI: 10.1109/ICPRIME.2013.6496460
Citations: 0