
Latest publications: 2015 Eighth International Conference on Contemporary Computing (IC3)

Improved recognition rate of language identification system in noisy environment
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346681
Randheer Bagi, Jainath Yadav, K. S. Rao
Spoken language identification is a technique to model and classify the language spoken by an unknown person. The language identification task is more challenging in environmental conditions due to the addition of different types of noise. The presence of noise in the speech signal causes several nuisances. This paper covers several aspects of language identification in noisy environments. Experiments have been carried out using the speaker-independent Multilingual Indian Language Speech Corpus of the Indian Institute of Technology, Kharagpur (IITKGP-MLILSC). In the proposed method, acoustic features are extracted from the raw speech signal. Gaussian Mixture Models (GMMs) are used to train the language models. To analyze the behavior of the identification system in a noisy environment, white noise is added to the clean speech corpus at different noise levels. The recognition rate on noisy speech was about 14.84%, a significant performance degradation compared to clean speech. To overcome this adverse identification condition, noise reduction is necessary. Spectral Subtraction (SS) and Minimum Mean Square Error (MMSE) estimation are used to suppress the noise. The overall average recognition rate of the proposed system on clean speech is 56.48%. For speech enhanced using SS and MMSE, the recognition rate is 35.91% and 35.53% respectively, a significant improvement over the recognition rate on noisy speech (14.84%).
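The paper does not give its SS parameters; the following is a minimal sketch of magnitude spectral subtraction, assuming a single noise-only reference segment, a fixed frame size, and no overlap (all illustrative choices, not from the paper):

```python
import numpy as np

def spectral_subtraction(noisy, noise_ref, frame=256, floor=0.01):
    """Enhance speech by subtracting an estimated noise magnitude
    spectrum from each frame's magnitude spectrum, keeping the
    noisy phase (the basic SS recipe)."""
    noise_mag = np.abs(np.fft.rfft(noise_ref[:frame]))
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.abs(spec) - noise_mag                # subtract noise estimate
        mag = np.maximum(mag, floor * np.abs(spec))   # spectral floor avoids negatives
        out[start:start + frame] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame)
    return out
```

The spectral floor is what keeps the method from producing negative magnitudes; tuning it trades residual noise against "musical noise" artifacts.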
Citations: 2
DocTool - a tool for visualizing software projects using graph database
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346721
A. Sadar, J. VinithaPanicker
In an organization, a software development life cycle consists of teams working in different structural hierarchies. Maintaining complex software, where continuous additions and updates are performed by different developers, is a challenging task. There is also a certain amount of latency in communication between two teams regarding an entity of interest in the software. Different software visualization tools have been proposed to address these issues. Many of them provide a view of the software structure by parsing the source code and analyzing the depth and quality of the code. In this paper we propose DocTool, which provides a simple and easy-to-use solution to two problems: (i) visualizing the entities of a software system and their properties, and (ii) visualizing the workflow in the software. The tool uses a set of JSON files and a graph database as the backbone. The proposed solution is very simple and gives the user total control over the data he wants to focus on. The tool can be implemented for software developed on any kind of platform. The design and implementation of the tool for a Java Web Application are discussed in this paper.
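The abstract does not specify the tool's data model, but the entity/workflow split it describes maps naturally onto a property graph. A minimal in-memory stand-in (all class and method names here are hypothetical, not the paper's API) could look like:

```python
class SoftwareGraph:
    """Toy stand-in for a graph-database backbone: nodes are software
    entities with arbitrary properties, directed edges model workflow."""

    def __init__(self):
        self.nodes = {}   # entity name -> property dict
        self.edges = []   # (source, target, label) triples

    def add_entity(self, name, **props):
        self.nodes[name] = props

    def add_flow(self, src, dst, label=""):
        self.edges.append((src, dst, label))

    def downstream(self, name):
        """Entities reachable in one workflow step from `name`."""
        return [dst for src, dst, _ in self.edges if src == name]


g = SoftwareGraph()
g.add_entity("LoginController", kind="class", owner="team-auth")
g.add_entity("UserDAO", kind="class", owner="team-db")
g.add_flow("LoginController", "UserDAO", label="queries")
```

In a real deployment the same shape would be expressed as labeled nodes and relationships in a graph database, with the JSON files supplying the node properties.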
Citations: 5
A Scientometric analysis of computer science research in India
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346675
Khushboo Singhal, S. Banshal, A. Uddin, V. Singh
This paper presents the results of our scientometric and text-based analysis of computer science research output from India during the last 25 years. We collected data for research output indexed in Scopus and performed a detailed computational analysis to obtain important indicators, such as total research output, citation impact, collaboration patterns, and top institutions/authors/publication sources. We also performed a text-based analysis of the keywords of all papers indexed in Scopus to identify thematic trends during the period. The analytical results present a detailed and useful picture of the status and competence of CS domain research in India.
Citations: 12
Removing occlusion using even odd interlacing for efficient class room teaching
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346712
V. Saxena, Ananya, Juhi Shukla
Occlusion is any unwanted object that hinders the view of the objects or scenes behind it. Often the viewer cannot comprehend what is present in the background due to the continuous presence of the occlusion. In a classroom or boardroom setting, occlusion of the blackboard/whiteboard contents by objects such as teachers and other students walking in front of the board leads to missing out on valuable components of the board contents. Capturing the board content in the presence of such occlusions is a difficult task. The work presented in this paper focuses particularly on meetings and/or classrooms that use a whiteboard/blackboard heavily for lectures, project planning meetings, etc. In these scenarios boards are considered necessary, yet their contents are lost most of the time without even being archived. Consequently, a tool that assists students in documenting the board content by removing the objects in front of the board is highly desirable. In this paper, we develop a method to remove occlusion present in front of the board by video inpainting, and we further use even-odd interlacing and contour formation to improve the results.
Citations: 0
Content aware targeted image manipulation to reduce power consumption in OLED panels
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346727
Prafulla Kumar Choubey, A. K. Singh, Raghu B. Bankapur, SB VaisakhP.C., B. ManojPrabhu
The FHD, QHD and UHD classes of high pixel density panels enable precise rendering and improve the display quality of images. But driving such high-resolution panels requires high power on both the panel and the rendering (GPU) side. In smartphones especially, where the content is usually UI, resolutions like 2K and 4K are not demanded unless the content is graphics/multimedia. This paper proposes a display-content and human-visual-acuity aware technique to reduce OLED panel power consumption by turning off selected subpixels in specific regions of the display. The technique exploits the fact that power consumption in OLED panels is directly related to the color and intensity of each pixel, so considerable power can be saved with intelligent control at the pixel/subpixel level. Results obtained through the first PoC implementation showed power savings of up to 28%, depending on the aggressiveness of the implementation.
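The "power follows subpixel intensity" relationship can be sketched with a toy linear power model; the per-channel weights below are illustrative placeholders (real OLED panels have measured, panel-specific efficiencies, with blue typically the most expensive):

```python
import numpy as np

# Illustrative per-channel power weights (R, G, B) -- not measured values.
WEIGHTS = (1.0, 0.8, 1.3)

def oled_power(img):
    """Approximate panel power as a weighted sum of R, G, B subpixel
    intensities: an OLED subpixel's draw scales with its drive level."""
    return float(sum(w * img[..., c].sum() for c, w in enumerate(WEIGHTS)))

def dim_region(img, mask, keep=0.5):
    """The content-aware step: scale down subpixels inside `mask`,
    which would mark regions of low visual importance."""
    out = img.astype(float).copy()
    out[mask] *= keep
    return out
```

Under this model, dimming half of a uniform image to 50% drive yields 75% of the original power, which is how region-selective control translates into panel-level savings.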
Citations: 11
Gray scale image watermarking using fuzzy entropy and Lagrangian twin SVR in DCT domain
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346646
A. Yadav, R. Mehta, Raj Kumar
In this paper, the effect of low, middle and high frequency DCT coefficients on gray scale image watermarking is investigated in terms of imperceptibility and robustness. The performance of Lagrangian twin support vector regression (LTSVR), which Balasundaram et al. [9] successfully applied to various regression problems on synthetic datasets obtained from the UCI repository, is validated on the image watermarking problem by embedding and extracting the watermark on different standard and real-world images. The good learning capability of image characteristics provides good imperceptibility of the watermark, and robustness against several kinds of image processing attacks verifies the high generalization performance of LTSVR. The experimental results show that significantly better imperceptibility and robustness are achieved using low frequency (LF) DCT coefficients compared to middle frequency (MF) and high frequency (HF) DCT coefficients, as well as the state-of-the-art technique.
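The LF/MF/HF comparison assumes the 8x8 DCT block is split into frequency bands. A minimal sketch of band selection and additive embedding follows; the row+column index-sum split and the additive scheme are common illustrative choices, not the paper's fuzzy-entropy/LTSVR method:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, so D @ block @ D.T is the 2-D DCT."""
    k = np.arange(n)
    d = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    d[0] /= np.sqrt(2)
    return d

def band_mask(n=8, band="low"):
    """Split an n x n coefficient block into low/mid/high frequencies
    by the row+column index sum (a common heuristic)."""
    s = np.add.outer(np.arange(n), np.arange(n))
    if band == "low":
        return s < n // 2
    if band == "mid":
        return (s >= n // 2) & (s < n)
    return s >= n

def embed_in_band(block, bit, strength=4.0, band="low"):
    """Additively embed one watermark bit into the chosen band."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    coeffs[band_mask(block.shape[0], band)] += strength if bit else -strength
    return D.T @ coeffs @ D   # inverse of an orthonormal transform
```

Because the transform is orthonormal, the spatial-domain distortion energy equals the coefficient perturbation energy, which is why band choice directly trades imperceptibility against robustness.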
Citations: 5
User verification using safe handwritten passwords on smartphones
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346651
T. Kutzner, Fanyu Ye, Ingrid Bönninger, C. Travieso-González, M. Dutta, Anushikha Singh
This article focuses on writer verification using safe handwritten passwords on smartphones. We extract and select 25 static and dynamic biometric features from a handwritten character password sequence on an Android touch-screen device. For writer verification we use the classification algorithms of the WEKA framework. Our 32 test persons wrote generated safe passwords with a length of 8 characters; each person wrote their password 12 times. The approach works with 384 training samples on a supervised system. The proposal reached its best result, a 98.72% rate of correct classification, with the KStar and k-Nearest Neighbor classifiers after ranking with Fisher Score feature selection. The best false acceptance rate of 10.42% is reached with the KStar classifier.
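The Fisher Score used for feature ranking is the standard between-class over within-class variance ratio; a compact sketch (the synthetic data and function name are illustrative, not the paper's WEKA pipeline):

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher Score: variance of class means around the
    overall mean, divided by the pooled within-class variance.
    Higher scores mark more discriminative features."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)   # epsilon guards constant features
```

Ranking features by this score and keeping the top-k before classification is the feature-selection step the abstract refers to.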
Citations: 10
Human identification using Linear Multiclass SVM and Eye Movement biometrics
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346708
Namrata Srivastava, Utkarsh Agrawal, S. Roy, U. Tiwary
The paper presents a system to accurately differentiate between individuals by utilizing various eye-movement biometric features. Eye movements are highly resistant to forgery, as their generation involves complex neurological interactions and extraocular muscle properties. We employ a Linear Multiclass SVM model to classify the numerous eye-movement features, which were obtained by making a person fixate on a visual stimulus. Testing with this model obtained a classification accuracy of 91% to 100% on the dataset used. The results are a clear indication that eye-based biometric identification has the potential to become a leading behavioral technique in the future. Moreover, its fusion with other biometric processes such as EEG, face recognition, etc., can further increase its classification accuracy.
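A linear multiclass SVM is commonly built as one binary linear SVM per class (one-vs-rest). A minimal sketch, using sub-gradient descent on the regularized hinge loss (the paper does not specify its solver; hyperparameters below are illustrative):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.001, epochs=300):
    """Binary linear SVM trained by sub-gradient descent on the
    regularized hinge loss; labels y must be in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                              # only the regularizer acts
                w -= lr * lam * w
    return w, b

def predict_multiclass(models, X):
    """One-vs-rest decision: the class whose SVM scores highest wins."""
    scores = np.stack([X @ w + b for w, b in models], axis=1)
    return scores.argmax(axis=1)
```

Each subject would correspond to one class, with the 2-D toy features below standing in for the eye-movement feature vectors.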
Citations: 21
Secure data transmission using video
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346684
Nikita Lemos, Kavita Sonawane, Bidisha Roy
In today's world, transmitting data in a safe and secure fashion is difficult, especially when highly sensitive data is involved. The data should ideally be robust to security threats. This paper proposes a methodology that employs a dual level of security, using cryptography and steganography to hide secret text data with video as a cover. A blend of existing and novel techniques is used for hiding the data: cryptography conceals the secret data, while steganography hides the existence of the data. Since a video is addressed as a collection of frames, a frame selection logic is incorporated which inserts data into the frames in a random fashion so that the data cannot be retrieved easily by an attacker. Considering the goals of cryptography and steganography, the methodology can be analyzed on the basis of visual perceptibility, error ratios, and histogram comparison of video frames before and after hiding data. The use of two techniques with distinct goals increases robustness; the simple frame selection logic prevents the attacker from obtaining the data easily, while simplifying the sender's job, since a rule index of frames is maintained.
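The random frame selection only works if the receiver can rebuild the same frame index, which suggests a shared key seeding the selection. A minimal sketch with keyed selection plus LSB embedding (the key, the LSB scheme, and all names here are illustrative assumptions, not the paper's exact method):

```python
import random

def select_frames(n_frames, n_needed, key=1234):
    """Keyed pseudo-random frame selection: sender and receiver share
    `key`, so both derive the identical rule index of frames."""
    rng = random.Random(key)
    return sorted(rng.sample(range(n_frames), n_needed))

def embed_byte(pixels, byte):
    """Hide one byte in the least significant bits of the first 8
    pixel values of a selected frame."""
    out = list(pixels)
    for i in range(8):
        bit = (byte >> (7 - i)) & 1
        out[i] = (out[i] & 0xFE) | bit   # clear LSB, then set it to `bit`
    return out

def extract_byte(pixels):
    """Recover the byte from the same 8 LSBs."""
    return int("".join(str(p & 1) for p in pixels[:8]), 2)
```

Because only LSBs change, each carrier pixel moves by at most one intensity level, which is what keeps the histogram and visual-perceptibility differences small.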
Citations: 8
ANPR Indian system using surveillance cameras
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346695
A. Singh, Souvik Roy
Number plate recognition techniques are widely used for identifying vehicles across the world, where a standard plate size and font are maintained, making recognition easy. Implementing number plate recognition specifically for India raises a number of issues: hundreds of different fonts in use, plate sizes not maintained, five different number plate colors, double-line number plates, etc. All these problems are addressed in the development of software for Indian number plate recognition, which is tested under real Indian road conditions. Support Vector Machines are trained and used for detection of number plate contours. An ANN is used for character recognition from the number plate, together with various algorithms for plate enhancement and noise reduction; ultimately, neural networks prove the most efficient, eliminating many camera constraints. The ANPR software is designed in C++ using Qt for GUI design, OpenCV as the image processing library, and SQL for database management, making it a complete software implementation of the idea.
Citations: 10
2015 Eighth International Conference on Contemporary Computing (IC3)