
Latest publications: 2015 Eighth International Conference on Contemporary Computing (IC3)

Reconstructing h-convex binary images from its horizontal and vertical projections by simulated annealing
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346664
Divyesh Patel, T. Srivastava
The field of Discrete Tomography (DT) deals with the reconstruction of 2D discrete images from a small number of their projections. The ideal problem of DT is to reconstruct a binary image from its horizontal and vertical projections. This problem turns out to be highly underdetermined, so additional constraints must be imposed. This paper exploits the convexity property of binary images and considers the reconstruction of h-convex binary images from their horizontal and vertical projections. The problem is transformed into two different optimization problems by defining two appropriate objective functions, and two simulated annealing (SA) algorithms are then developed to solve them. The SA algorithms are tested on various randomly generated test images, as well as on noisy images. Finally, numerical results are reported, showing good reconstruction fidelity.
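The SA skeleton behind such a reconstruction can be sketched for the basic projection-matching objective. This is a toy illustration under assumed parameters; the paper's two objective functions and its h-convexity moves are not reproduced here.

```python
import math
import random

def projections(img):
    """Row and column sums of a binary image (list of lists of 0/1)."""
    return [sum(row) for row in img], [sum(col) for col in zip(*img)]

def sa_reconstruct(h, v, iters=20000, t0=1.0, seed=0):
    """Toy simulated annealing: flip single pixels to reduce the mismatch
    between the current and the prescribed projections."""
    rng = random.Random(seed)
    m, n = len(h), len(v)
    img = [[0] * n for _ in range(m)]

    def cost(im):
        hh, vv = projections(im)
        return (sum(abs(a - b) for a, b in zip(hh, h))
                + sum(abs(a - b) for a, b in zip(vv, v)))

    c = cost(img)
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9      # linear cooling schedule
        i, j = rng.randrange(m), rng.randrange(n)
        img[i][j] ^= 1                         # propose a single-pixel flip
        c2 = cost(img)
        if c2 <= c or rng.random() < math.exp(-(c2 - c) / t):
            c = c2                             # accept the move
        else:
            img[i][j] ^= 1                     # reject: undo the flip
        if c == 0:
            break                              # projections matched exactly
    return img, c

# Reconstruct a 4x4 binary image from consistent projections.
h_true, v_true = [2, 1, 3, 0], [1, 2, 2, 1]
rec, err = sa_reconstruct(h_true, v_true)
```

With consistent projections (equal row and column totals), the flip-based search typically drives the mismatch toward zero; the paper's algorithms additionally restrict the search to h-convex images.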
Citations: 3
Line based extraction of important regions from a cheque image
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346676
Prabhat Dansena, K. P. Kumar, R. Pal
Automatic extraction of important regions from a cheque image helps in the automatic analysis of the cheque. It can be used for automated clearing of cheques, detection of frauds in cheques, and so on. In this paper, a novel approach for extracting important regions from a cheque image, based on the identification of lines, is proposed. Experimental results demonstrate the success of the proposed approach.
Citations: 9
BNSR: Border Node preferred Social Ranking based Routing Protocol for VANETs
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346743
Bhuvan Mehan, Sanjay Batish, R. Bhatia, A. Dhiman
The Border Node preferred Social Ranking based Routing Protocol (BNSR) presented in this paper is an extension of the BMFR routing protocol. BNSR follows position-based routing, using location services such as GPS, and its forwarding strategy gives prominence to border-node-based forwarding to reduce delay and optimize path length. BNSR incorporates the concept of social ranking, a parameter of the continuous opinion dynamics optimization (CODO) technique, on the basis of which the next-hop border node is selected. The protocol is simulated with the NS2 simulator, and the results show that the algorithm works well, producing a better packet delivery ratio (PDR) and minimal end-to-end delay. Compared with the BMFR protocol, the proposed protocol is considerably more effective and efficient in VANETs. To the best of our knowledge, we are the first to introduce the concept of social ranking for selecting next-hop border nodes.
Citations: 2
An improved approach to English-Hindi based Cross Language Information Retrieval system
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346706
E. Katta, Anuja Arora
Cross Language Information Retrieval (CLIR) is a subdomain of Information Retrieval. It deals with the retrieval of information in a specified language different from the language of the user's query. In this paper, an improved English-Hindi based CLIR is proposed. There are various overlooked areas in this broad research field that need to be worked on in order to improve the performance of an English-Hindi based CLIR; in particular, little research effort has been put into improving the searching and ranking aspects of CLIR systems, especially for English-Hindi based CLIR. This paper focuses on applying algorithms such as Naïve Bayes and particle swarm optimization to improve the ranking and searching aspects of a CLIR system. We match terms contained in documents to the query terms in the same sequence as they appear in the search query, to make the system more efficient. In addition, our approach uses a bilingual English-Hindi translator for query conversion into Hindi. Further, we use Hindi query expansion and synonym generation, which help retrieve more relevant results in an English-Hindi based CLIR than the existing one. Together, these techniques give the user the chance to choose a more appropriate Hindi query than the single translated query alone, improving overall performance.
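The sequence-preserving term matching mentioned above might be scored as follows. This is a hypothetical scoring function; the abstract does not specify the paper's exact matching procedure.

```python
def in_sequence_match_score(query_terms, doc_terms):
    """Fraction of query terms that occur in the document in the same
    relative order as in the query (illustrative scoring only)."""
    matched, pos = 0, 0
    for q in query_terms:
        try:
            # Each term must occur at or after the previous match.
            pos = doc_terms.index(q, pos) + 1
            matched += 1
        except ValueError:
            continue
    return matched / len(query_terms) if query_terms else 0.0

doc = ("cross language information retrieval maps hindi queries "
       "to english documents").split()
```

A query whose terms appear in document order scores higher than one whose terms appear out of order, which is the intuition behind sequence-aware matching.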
Citations: 8
An efficient and modified median root prior based framework for PET/SPECT reconstruction algorithm
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346643
Shailendra Tiwari, R. Srivastava
Bayesian statistical algorithms play a significant role in the quality of images produced by emission tomography such as PET/SPECT, since they can provide an accurate system model. The major drawbacks of these algorithms include slow convergence, the choice of an optimal initial point, and ill-posedness. To address these issues, this paper proposes a hybrid cascaded framework for a Median Root Prior (MRP) based reconstruction algorithm. The framework breaks the reconstruction process into two parts, primary and secondary. In the primary part, the simultaneous algebraic reconstruction technique (SART) is applied to overcome the problems of slow convergence and initialization; it converges quickly and produces good reconstruction results with fewer iterations than other iterative methods. The task of the primary part is to provide an enhanced image to the secondary part as an initial estimate for the reconstruction process. The secondary part is a hybrid combination of a reconstruction part and a prior part: reconstruction is done using the Median Root Prior (MRP), while Anisotropic Diffusion (AD) is used as the prior to deal with ill-posedness. A comparative analysis of the proposed model against other standard methods from the literature is presented, both qualitatively and quantitatively, on a simulated phantom and a standard medical test image. Cascading the primary and secondary reconstruction steps yields significant improvements in reconstructed image quality, accelerates convergence, and provides enhanced results from the projection data. The obtained results justify the applicability of the proposed method.
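The core MRP idea, penalizing each value relative to the median of its neighbourhood, can be illustrated on a 1-D signal. This is a toy sketch: `beta` and the 3-sample neighbourhood are assumptions, and the full SART/AD pipeline is not reproduced.

```python
import statistics

def median_root_prior_step(x, beta=0.3, eps=1e-6):
    """One MRP-style regularization step on a 1-D signal: pull each value
    toward the median of its 3-neighbourhood (toy form of the penalty)."""
    out = []
    for j, xj in enumerate(x):
        lo, hi = max(0, j - 1), min(len(x), j + 2)
        med = statistics.median(x[lo:hi])
        # The MRP gradient is proportional to (x_j - med) / med.
        out.append(xj - beta * (xj - med) / (med + eps))
    return out

noisy = [1.0, 1.1, 5.0, 1.05, 0.95]   # isolated spike at index 2
smoothed = median_root_prior_step(noisy)
```

The spike is pulled toward its neighbourhood median while locally monotone regions are barely changed, which is why MRP preserves edges better than quadratic smoothing priors.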
Citations: 0
Online anomaly detection via class-imbalance learning
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346648
Chandresh Kumar Maurya, Durga Toshniwal, G. V. Venkoparao
Anomaly detection is an important task in many real-world applications such as fraud detection, suspicious activity detection, health care monitoring, etc. In this paper, we tackle this problem from a supervised learning perspective in an online learning setting. We maximize the well-known Gmean metric for class-imbalance learning in an online learning framework. Specifically, we show that maximizing Gmean is equivalent to minimizing a convex surrogate loss function, and based on this we propose a novel online learning algorithm for anomaly detection. We then show, through extensive experiments, that the performance of the proposed algorithm with respect to the sum metric is as good as that of the recently proposed Cost-Sensitive Online Classification (CSOC) algorithm for class-imbalance learning over various benchmark data sets, while keeping the running time close to that of the perceptron algorithm. Another conclusion is that other competitive online algorithms do not perform consistently over data sets of varying size. This shows the potential applicability of our proposed approach.
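The Gmean metric maximized here is the geometric mean of sensitivity and specificity. A minimal sketch, with illustrative labels (1 = anomaly):

```python
import math

def gmean(y_true, y_pred):
    """Geometric mean of sensitivity (true-positive rate) and
    specificity (true-negative rate) for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    sens = tp / pos if pos else 0.0
    spec = tn / neg if neg else 0.0
    return math.sqrt(sens * spec)

# On an imbalanced set, always predicting the majority class zeroes
# out sensitivity, so Gmean drops to 0 -- unlike plain accuracy.
y_true   = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
majority = [0] * 10
balanced = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
g_major, g_bal = gmean(y_true, majority), gmean(y_true, balanced)
```

This degenerate-classifier behaviour is what makes Gmean a natural objective for anomaly detection under class imbalance.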
Citations: 14
An efficient undeniable signature scheme using braid groups
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346736
Pratik Ranjan, H. Om
Signature schemes are used to verify the authenticity of a signature and the corresponding documents. Undeniable signature schemes are challenge-response based interactive schemes in which the active participation of the signer is compulsory. These schemes are used in private communication, where confidential deals and agreements take place, since a legitimate signer cannot deny his signature. In this paper, we analyze Thomas and Lal's braid-group-based zero-knowledge undeniable signature scheme and show that it is insecure against man-in-the-middle and impersonation attacks. In addition, we propose an efficient undeniable signature scheme using braid groups that provides secrecy and authenticity for a legitimate signer. Furthermore, we show that our scheme is secure against the above-mentioned attacks.
Citations: 0
Analysis and modification of spectral energy for neutral to sad emotion conversion
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346690
Arijul Haque, K. S. Rao
This work explores the spectral energies of neutral, sad, and angry speech, and analyzes the potential of spectral energy modification to convert neutral speech to sad or angry speech. A method of modifying the spectral energy of neutral speech signals, based on a filter-bank implementation, is proposed for converting a given neutral speech signal to a target emotional speech. Since pitch plays a vital role in emotion expression, we first modify the pitch contour using Gaussian normalization, and then modify the spectral energy using the method proposed in this paper. The expressiveness of the resulting speech is compared with speech obtained by modifying only the pitch contour, and we observe improvements in expressiveness due to the incorporation of the proposed spectral energy modification. The method works quite well for neutral-to-sad conversion. However, the quality of conversion to anger is not good, and the reasons behind this are analyzed.
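Gaussian normalization of the pitch contour shifts and scales the source F0 statistics onto those of the target emotion. A minimal sketch; the target mean and standard deviation below are illustrative, not values from the paper.

```python
import statistics

def gaussian_normalize_pitch(f0, target_mean, target_std):
    """Map an F0 contour so that it has the target mean and standard
    deviation: f0' = mu_t + (f0 - mu_s) * sigma_t / sigma_s."""
    mu = statistics.mean(f0)
    sigma = statistics.pstdev(f0) or 1.0   # guard against a flat contour
    return [target_mean + (v - mu) * target_std / sigma for v in f0]

neutral_f0 = [120.0, 128.0, 135.0, 130.0, 122.0]   # Hz, per frame
# Sad speech typically has a lower, flatter pitch (assumed target stats).
sad_f0 = gaussian_normalize_pitch(neutral_f0, target_mean=110.0,
                                  target_std=3.0)
```

Because the mapping is linear, the output contour matches the target statistics exactly while preserving the shape of the original contour.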
Citations: 5
Dynamic facial emotion recognition from 4D video sequences
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346705
P. Suja, P. KalyanKumarV., Shikha Tripathi
Emotions are characterized as responses to internal and external events of a person. Emotion recognition through facial expressions in videos plays a vital role in human-computer interaction, where the dynamic changes in facial movements need to be recognized quickly. In this work, we propose a simple, geometry-based method for recognizing the six basic emotions in video sequences from the BU-4DFE database. We choose optimal feature points from the 83 feature points provided in the BU-4DFE database. A video expressing an emotion contains frames covering the neutral, onset, apex, and offset phases of that emotion, and we dynamically identify the frame that is most expressive for the emotion (the apex). The Euclidean distances between the feature points are determined in the apex and neutral frames, and their differences between the corresponding neutral and apex frames form the feature vector. The feature vectors for all emotions and subjects are given to Neural Networks (NN) and Support Vector Machines (SVM) with different kernels for classification, and we compare the accuracies obtained by NN and SVM. Our proposed method is simple, uses only two frames, and yields good accuracy on the BU-4DFE database. Very complex algorithms exist in the literature for the BU-4DFE database, and our simple method gives comparable results. It can be applied to real-time implementation and kinesics in the future.
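One reading of this feature construction, pairwise feature-point distances differenced between the neutral and apex frames, can be sketched as follows. The coordinates and the three landmarks are purely illustrative.

```python
import math
from itertools import combinations

def frame_distances(points):
    """Euclidean distances between every pair of facial feature points
    within one frame."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def emotion_feature_vector(neutral_pts, apex_pts):
    """Feature vector as the change in pairwise feature-point distances
    between the neutral and apex frames (one possible reading of the
    paper's geometric features)."""
    return [a - n for n, a in zip(frame_distances(neutral_pts),
                                  frame_distances(apex_pts))]

neutral = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]   # e.g. mouth corners, lip
apex    = [(0.0, 0.0), (2.4, 0.0), (1.2, 1.3)]   # mouth widens at the apex
fv = emotion_feature_vector(neutral, apex)
```

Only two frames per video are needed, which matches the method's stated simplicity.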
Citations: 13
Leveraging probabilistic segmentation to document clustering
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346657
Arko Banerjee
In this paper a novel approach to document clustering is introduced by defining a representative-based document similarity model that performs probabilistic segmentation of documents into chunks. The frequently occurring chunks, which are considered representatives of the document set, may represent phrases or stems of true words. The representative-based document similarity model, containing a term-document matrix with respect to the representatives, is a compact representation of the vector space model that improves the quality of document clustering over traditional methods.
本文引入了一种新的文档聚类方法,通过定义一个基于代表性的文档相似度模型,将文档概率分割成块。经常出现的块被认为是文档集的代表,可以代表真实单词的短语或词干。基于代表的文档相似度模型,包含一个相对于代表的术语-文档矩阵,是向量空间模型的紧凑表示,它比传统方法提高了文档聚类的质量。
{"title":"Leveraging probabilistic segmentation to document clustering","authors":"Arko Banerjee","doi":"10.1109/IC3.2015.7346657","DOIUrl":"https://doi.org/10.1109/IC3.2015.7346657","url":null,"abstract":"In this paper a novel approach to document clustering is introduced by defining a representative-based document similarity model that performs probabilistic segmentation of documents into chunks. The frequently occurring chunks, which are considered representatives of the document set, may represent phrases or stems of true words. The representative-based document similarity model, containing a term-document matrix with respect to the representatives, is a compact representation of the vector space model that improves the quality of document clustering over traditional methods.","PeriodicalId":217950,"journal":{"name":"2015 Eighth International Conference on Contemporary Computing (IC3)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134552491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
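To make the representative-based similarity model concrete, here is a minimal sketch. It substitutes a simple frequency count of character n-grams for the paper's probabilistic segmentation (the n-gram length, the top-k cutoff, and the use of cosine similarity are illustrative assumptions, not the paper's actual method): the most frequent chunks across the corpus serve as representatives, each document becomes a column of the term-document matrix over those representatives, and similarity between documents is compared in that compact space.

```python
from collections import Counter
from math import sqrt


def frequent_chunks(docs, n=3, top_k=20):
    """Stand-in for probabilistic segmentation: the top-k most frequent
    character n-grams across the corpus act as the 'representatives'."""
    counts = Counter()
    for doc in docs:
        counts.update(doc[i:i + n] for i in range(len(doc) - n + 1))
    return [chunk for chunk, _ in counts.most_common(top_k)]


def doc_vector(doc, representatives):
    """One term-document matrix column: count of each representative
    chunk occurring in the document."""
    return [doc.count(chunk) for chunk in representatives]


def cosine(u, v):
    """Cosine similarity between two document vectors (0.0 if either is zero)."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0
```

A clustering algorithm (e.g. k-means over these vectors) would then group documents whose representative-chunk profiles are similar.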