
Latest publications: 2014 IEEE International Conference on Computational Intelligence and Computing Research

EEG features extraction using PCA plus LDA approach based on L1-norm for motor imaginary classification
Surendra Gupta, Hema Saini
Brain-Computer Interfaces (BCIs) are communication systems in which users produce signals related to a specific intention through brain activity rather than actual motor movements; these signals in turn control computers or attached communication devices. The activity is generally measured by electroencephalography (EEG). BCIs use a pattern-recognition approach in which features extracted from EEG signals identify the user's mental state. The most commonly used feature extraction method in BCI is the Common Spatial Pattern (CSP). Despite its usefulness, CSP suffers from the intrinsic variations and nonstationarity of EEG data because it ignores within-class dissimilarities. Moreover, the CSP criterion is formulated as a variance based on the L2-norm, which also makes it sensitive to outliers. A new PCA plus LDA method based on the L1-norm is proposed as an alternative to CSP; it efficiently accounts for both between-class and within-class dissimilarities, and its objective function is reformulated using the L1-norm to suppress the effect of outliers. The optimal spatial patterns of the method are obtained through an iterative algorithm. The proposed method was evaluated on Dataset IIa of BCI Competition IV. The results show that it outperformed in almost all cases, with a low misclassification rate and an average kappa value of 0.3482.
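The abstract does not spell out the iterative algorithm; a well-known way to obtain an L1-norm principal direction is the sign-flipping fixed-point iteration (Kwak-style), sketched below on made-up data. This is an illustration of the general L1-PCA idea, not necessarily the authors' exact procedure; note how the gross outlier fails to hijack the direction, unlike with an L2 criterion.

```python
import numpy as np

def l1_pca_component(X, n_iter=100, seed=0):
    """One leading L1-norm principal component via sign-flipping:
    maximize sum_i |w^T x_i| over unit vectors w."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0            # avoid zero signs stalling the update
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

# Toy data: strong spread along the x-axis plus one gross outlier on y.
line = np.column_stack([np.linspace(-5, 5, 50),
                        0.1 * np.random.default_rng(1).standard_normal(50)])
X = np.vstack([line, [[0.0, 40.0]]])
X -= X.mean(axis=0)
w = l1_pca_component(X)        # dominated by the x-axis, not the outlier
```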
Cited by: 2
Efficient analysis of pharmaceutical compound structure based on pattern matching algorithm in data mining techniques
V. Palanisamy, A. Kumarkombaiya
The proposed methodology finds the chain details of a pharmaceutical compound by retrieving data in numeric format derived from its functional groups. An enhanced Knuth-Morris-Pratt pattern-matching algorithm, a data mining technique, is then applied to identify the pattern of a chemical compound represented as string data, based on functional groups connected to one another, mirroring the numeric data.
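The paper's enhanced KMP variant is not detailed in the abstract; the standard Knuth-Morris-Pratt search it builds on can be sketched as follows, here locating a repeated functional-group motif in a hypothetical SMILES-like compound string:

```python
def kmp_search(text, pattern):
    """Return all start indices of pattern in text (standard KMP)."""
    if not pattern:
        return []
    # Failure table: length of the longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]      # continue, allowing overlapping matches
    return hits

# Carboxyl-like motif appearing twice in an aspirin-style SMILES string.
print(kmp_search("CC(=O)OC1=CC=CC=C1C(=O)O", "C(=O)O"))  # → [1, 18]
```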
Cited by: 1
A novel approach to handle TCP connections for LAN in mobile vehicle
Upasana Trivedi, N. Dutta
The Transmission Control Protocol (TCP) is the most important transport-layer protocol for Internet access over TCP/IP. TCP was designed mainly for wired networks and assumes that packet loss occurs chiefly due to congestion. In a wireless environment, however, the scenario is different: wireless links are slower and less reliable than wired links, and characteristics of the wireless network, such as user mobility, also contribute significantly to packet loss. Losses due to link failure or user movement in the wireless segment are much higher than losses due to congestion. In this research, we design a strategy that extends regular TCP in order to minimize packet loss and increase throughput for end users. We specifically target LAN-connected users in a moving train and aim to provide a reliable connection over a TCP session. A few modifications to regular TCP and some new functional components, such as a Mobile TCP Agent (MTA), are proposed to handle TCP sessions on the train.
Cited by: 0
Congestion management of deregulated electricity market using locational marginal pricing
T. Mohanapriya, T. Manikandan
In a deregulated power industry, estimating the power price and managing congestion are major issues faced by market participants. Modeling a sensible pricing structure for power systems is important for providing financial signals to electrical utilities. The Locational Marginal Pricing (LMP) technique is used to determine the energy price of transacted power and to manage system congestion. In this paper, a lossless Direct Current Optimal Power Flow is used to find the LMP value at each bus, and the resulting optimization problem is solved by linear programming. The variation of LMP values with transmission constraints is studied. Simulation is carried out on the IEEE 14-bus test system, and the obtained results give the electricity price at each location.
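By definition, the LMP at a bus is the marginal cost of serving one additional MW of load there. A toy two-bus illustration (made-up generator costs and line limit, not the paper's IEEE 14-bus case) shows how a binding transmission constraint separates prices between locations:

```python
def dispatch_cost(load2, line_limit=60.0, cap1=100.0, c1=20.0, c2=50.0):
    """Least-cost dispatch for a 2-bus system: a cheap generator at bus 1,
    an expensive generator at bus 2, all load at bus 2. Power flowing from
    bus 1 to bus 2 is capped by the line limit. (Illustrative numbers.)"""
    g1 = min(load2, cap1, line_limit)   # cheap energy, limited by the line
    g2 = load2 - g1                      # remainder served by the local unit
    return c1 * g1 + c2 * g2

def lmp(load2, **kw):
    """LMP at bus 2: cost of serving one more MW of load there."""
    return dispatch_cost(load2 + 1.0, **kw) - dispatch_cost(load2, **kw)

print(lmp(40.0))  # line uncongested: price set by the cheap unit → 20.0
print(lmp(80.0))  # line congested at 60 MW: price set by the local unit → 50.0
```

In the full DC-OPF formulation the same quantities appear as the dual variables (shadow prices) of the linear program's power-balance constraints.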
Cited by: 5
Risk factor analysis of patient based on adaptive neuro fuzzy interface system
M. Mayilvaganan, K. Rajeswari
The methodology proposed in this paper diagnoses and analyses health risk factors related to blood pressure, pulse rate and kidney function as measured by the Glomerular Filtration Rate (GFR). Conventional computing techniques handle only two predominant values, such as `True' or `False', `1' or `0', `Black' or `White'; fuzzy logic also handles the grey values that lie between `Black' and `White'. The system consists of 234 combined input fields and one output field. This work focuses on an Adaptive Neuro Fuzzy Interface System (ANFIS) that relies on a fuzzy logic controller to diagnose the level of the health risk factor, aggregated from blood pressure, pulse rate and kidney function over various input parameters. A fuzzy logic circuit was developed using a 2's-complement full adder, with inputs such as the blood pressure value (taken from the systolic and diastolic readings), the pulse rate and the GFR value. Using the OR-gate output, the pulse rate and blood pressure values are compared with kidney function to obtain the risk factor value efficiently. The input rule-based classifier membership functions X0, X1, X2, ..., Xn cover blood pressure categories such as Low, Normal, Very Low, Extremely Low, Very Dangerously Low, Dangerously Low BP, Borderline and Very Dangerously High, and the output classifier membership functions Y0, Y1, Y2, ..., Yn cover risk factor values such as Low, High and Normal. The proposed ANFIS system is validated against blood pressure data set values using the MATLAB Fuzzy Toolbox, and the simulated output analyses the risk factor value of a human subject.
Cited by: 0
Distributed noun attribute based on its first appearance for text document clustering
S. Vijayalakshmi, D. Manimegalai
The selection of attributes plays a vital role in improving the quality of clustering. We present a comparative study of three attribute selection techniques, which reveals previously unattempted combinations and provides guidelines for selecting attributes. Attribute selection has only occasionally been studied in unsupervised learning, although it has been extensively explored in supervised learning. The suggested framework is primarily concerned with determining and selecting key distributional noun attributes, which are nominated by ranking the attributes according to importance scores computed from the original noun attributes without class information. Experimental results on the Reuters, 20 Newsgroups, WebKB and SCJC (Specific Crime Judgment Corpus) datasets indicate that the algorithm, with different scores in context, is able to identify the important attributes.
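As a rough illustration of ranking attributes by an importance score without class information: here document frequency stands in for the paper's (unspecified) score, and whitespace tokens stand in for extracted nouns — both are assumptions for the sake of the sketch.

```python
from collections import Counter

def rank_attributes(docs, top_k=3):
    """Rank candidate attributes by document frequency (the number of
    documents each token appears in) and keep the top_k, with no use
    of class labels."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.lower().split()))   # count each doc at most once
    return [token for token, _ in df.most_common(top_k)]

docs = ["court convicts accused of fraud",
        "court hears fraud appeal",
        "judge sentences accused in court"]
top = rank_attributes(docs)   # "court" dominates; "fraud"/"accused" follow
```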
Cited by: 0
Transmission constrained economic load dispatch using biogeography based optimization
Jitendra Singh, S. Goyal
At present, the power crisis is worsening as consumer load increases, and optimization techniques have been applied to solve the resulting economic load dispatch problem. This paper presents an efficient and reliable Biogeography Based Optimization (BBO) algorithm for solving the economic load dispatch problem of a thermal power station while satisfying generator and transmission constraints. Biogeography is the study of the geographical distribution of biological organisms, and biogeography-based optimization is a comparatively new approach. Mathematical models of biogeography describe how organisms arise, migrate from one habitat to another, or become extinct. In the BBO algorithm, solutions share good features through immigration and emigration processes, and the search for the overall optimum proceeds mainly through two steps: migration and mutation. The results of the proposed method have been compared with results for the IEEE 30-bus, 6-generator system, and the obtained solutions are of better quality. This method is one of the prominent approaches for solving economic load dispatch problems under practical conditions.
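The migration/mutation loop described above can be sketched on a small dispatch problem. The three-unit quadratic cost data, the repair rule, and all rates below are invented for illustration (the paper uses the IEEE 30-bus, 6-generator system):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 3-unit dispatch: cost_i(P) = a_i*P^2 + b_i*P (made-up data).
A = np.array([0.008, 0.012, 0.010])
B = np.array([7.0, 6.3, 6.8])
PMIN, PMAX, DEMAND = 10.0, 100.0, 150.0

def repair(P):
    """Push a candidate dispatch toward the demand while respecting limits."""
    P = np.clip(P, PMIN, PMAX)
    for _ in range(10):                      # a few scale-then-clip passes
        P = np.clip(P * DEMAND / P.sum(), PMIN, PMAX)
    return P

def cost(P):
    return float(np.sum(A * P**2 + B * P))

def bbo(pop_size=30, gens=200, p_mut=0.05):
    pop = np.array([repair(rng.uniform(PMIN, PMAX, 3)) for _ in range(pop_size)])
    for _ in range(gens):
        pop = pop[np.argsort([cost(p) for p in pop])]   # rank habitats by HSI
        mu = np.linspace(1.0, 0.0, pop_size)            # emigration: best emigrates most
        lam = 1.0 - mu                                  # immigration: worst immigrates most
        new = pop.copy()
        for i in range(pop_size):
            for d in range(3):
                if rng.random() < lam[i]:               # migration: import this SIV
                    src = rng.choice(pop_size, p=mu / mu.sum())
                    new[i, d] = pop[src, d]
                if rng.random() < p_mut:                # mutation
                    new[i, d] += rng.normal(0.0, 5.0)
            new[i] = repair(new[i])
        new[0] = pop[0]                                 # elitism
        pop = new
    return pop[np.argsort([cost(p) for p in pop])[0]]

best = bbo()   # a feasible near-least-cost dispatch summing to the demand
```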
Cited by: 2
Impact of code rate and improvisation of the reconstructed image for CODEC
S. V. Viraktamath, Satish Rachayya Hiremath, G. V. Attimarad
Modern hand-held communication devices rely on forward error correction (FEC) techniques for proper functioning. Most digital communication systems today convolutionally encode the transmitted data to compensate for Additive White Gaussian Noise (AWGN), channel fading, quantization distortion and other data degradation effects. Owing to its efficient performance, the Viterbi algorithm has proven to be a very practical algorithm for forward error correction of convolutionally encoded messages, and convolutional coding with Viterbi decoding is one of the FEC techniques used in most communication applications. This paper investigates the impact of code rate on the performance of a hard-decision Viterbi decoder for image transmission. Code rates of 1/2, 2/3 and 3/5 are simulated with different constraint lengths and different generator polynomials for the image input. At lower BER, the reconstructed image can be further processed to obtain a lower MSE. All simulations are conducted in MATLAB over an AWGN channel.
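For reference, a rate-1/2, constraint-length-3 convolutional encoder with the common (7,5) octal generators, together with a matching hard-decision Viterbi decoder, can be sketched as below. The paper's exact rates and generator polynomials are not reproduced; this is the textbook baseline configuration.

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7,5)_8."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state               # newest bit in the MSB of a 3-bit register
        for gen in g:
            out.append(bin(reg & gen).count("1") % 2)
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoder matching conv_encode (zero start state)."""
    n_states, INF = 4, float("inf")
    metric = [0.0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        rx = received[2 * t: 2 * t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                # Hamming distance between expected and received symbol pair.
                branch = sum((bin(reg & gen).count("1") % 2) != r
                             for gen, r in zip(g, rx))
                m = metric[s] + branch
                if m < new_metric[ns]:       # keep the survivor path per state
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

bits = [1, 0, 1, 1, 0, 0, 1]
enc = conv_encode(bits)
decoded = viterbi_decode(enc, len(bits))     # recovers the input exactly
```

Since this code's free distance is 5, the decoder also recovers the message when a single coded bit is flipped, which is what makes the scheme useful over an AWGN channel.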
Cited by: 0
An enhanced feature selection method comprising rough set and clustering techniques
A. Murugan, T. Sridevi
Feature selection, or variable reduction, is a fundamental problem in data mining; it refers to the process of identifying the few most important features for a learning algorithm. The best subset contains the minimum number of dimensions while retaining suitably high classifier accuracy in representing the original features. The objective of the proposed approach is to reduce the number of input features, identifying the key features and eliminating irrelevant features with no predictive information, using clustering, K-nearest neighbours (KNN) and rough sets. This paper considers two partition-based clustering algorithms from data mining, K-Means and Fuzzy C-Means (FCM). The two algorithms are applied to the original data set without the class labels, and rough set theory is then applied to the partitioned data set to generate the feature subset, after outliers have been removed using KNN. Wisconsin Breast Cancer datasets from the UCI machine learning repository are used to test the proposed hybrid method. The results show that, with respect to classification accuracy, the hybrid method produces more accurate diagnosis and prognosis results than the full-input model.
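The KNN-based outlier-removal step can be sketched as follows, on synthetic 2-D data. The abstract does not give the removal rule, so the quantile cutoff on mean k-nearest-neighbour distance below is an assumption for illustration:

```python
import numpy as np

def knn_outlier_filter(X, k=3, keep=0.9):
    """Drop points whose mean distance to their k nearest neighbours falls
    in the top (1 - keep) fraction — a simple KNN outlier screen."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(D, np.inf)                                # ignore self-distance
    knn_dist = np.sort(D, axis=1)[:, :k].mean(axis=1)
    cutoff = np.quantile(knn_dist, keep)
    return X[knn_dist <= cutoff]

# Two tight clusters plus one far-away outlier.
rng = np.random.default_rng(0)
cluster1 = rng.normal([0.0, 0.0], 0.3, size=(20, 2))
cluster2 = rng.normal([5.0, 5.0], 0.3, size=(20, 2))
outlier = np.array([[20.0, -20.0]])
X = np.vstack([cluster1, cluster2, outlier])
Xc = knn_outlier_filter(X)     # the outlier is screened out before clustering
```

K-Means or FCM would then be run on `Xc`, and rough-set reduction applied to the resulting partitions.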
Cited by: 3
Classification of respiratory pathology in pulmonary acoustic signals using parametric features and artificial neural network
R. Palaniappan, K. Sundaraj, Sebastian Sundaraj, N. Huliraj, S. S. Revadi, B. Archana
Pulmonary acoustic signal analysis provides essential information on the present state of the lungs. In this paper, we distinguish among normal, airway obstruction, and interstitial lung disease conditions using pulmonary acoustic signal recordings. The proposed method extracts Mel-frequency cepstral coefficients (MFCC) and autoregressive (AR) coefficients as features from the pulmonary acoustic signals. The extracted features are then classified using an Artificial Neural Network (ANN) classifier, whose performance is analysed with the confusion-matrix technique. Mean classification accuracies of 92.59% and 91.69% were obtained for the MFCC and AR-coefficient features, respectively. The confusion-matrix analysis showed that, for the MFCC features, normal, airway obstruction, and interstitial lung disease were classified with 92.75%, 91.30%, and 92.75% accuracy, respectively; for the AR-coefficient features, the corresponding accuracies were 92.75%, 91.30%, and 89.85%. The analysis reveals that the proposed method shows promise in distinguishing among the normal, airway obstruction, and interstitial lung disease conditions.
{"title":"Classification of respiratory pathology in pulmonary acoustic signals using parametric features and artificial neural network","authors":"R. Palaniappan, K. Sundaraj, Sebastian Sundaraj, N. Huliraj, S. S. Revadi, B. Archana","doi":"10.1109/ICCIC.2014.7238315","DOIUrl":"https://doi.org/10.1109/ICCIC.2014.7238315","url":null,"abstract":"Pulmonary acoustic signal analysis provides essential information on the present state of the Lungs. In this paper, we intend to distinguish between normal, airway obstruction pathology and interstitial lung disease using pulmonary acoustic signal recordings. The proposed method extracts Mel frequency cepstral coefficients (MFCC) and AR Coefficients as features from pulmonary acoustic signals. The extracted features are then classified using Artificial Neural Network (ANN) classifier. The classifier performance is analysed by using confusion matrix technique. A mean classification accuracy of 92.59% and 91.69% was reported for the MFCC features and AR coefficients features respectively. The performance analysis of the ANN classifier using confusion matrix revealed that normal, airway obstruction and interstitial lung disease are classified at 92.75%, 91.30% and 92.75% classification accuracy respectively for the MFCC features. Similarly, normal, airway obstruction and interstitial lung disease are classified at 92.75%, 91.30% and 89.85% classification accuracy respectively for the AR coefficient features. 
The analysis reveals that the proposed method shows promising outcome in distinguishing between the normal, airway obstruction and interstitial lung disease.","PeriodicalId":187874,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Computing Research","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130139548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
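Of the two parametric features named in the abstract above, the AR coefficients are the simpler to compute from scratch (MFCCs are usually taken from a signal-processing library). Below is a minimal Yule-Walker estimate in NumPy; the model order and the synthetic test signal are our own illustrative choices, not the paper's setup.

```python
import numpy as np

def ar_coefficients(x, order):
    """Estimate AR(p) coefficients of a 1-D signal by solving the
    Yule-Walker equations built from the biased autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|].
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Hypothetical sanity check: recover the coefficients of a known AR(2)
# process x[t] = 0.75 x[t-1] - 0.5 x[t-2] + e[t].
rng = np.random.default_rng(0)
e = rng.normal(size=20000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

a = ar_coefficients(x, order=2)
print(np.round(a, 2))  # roughly [0.75, -0.5]
```

In the paper's setting, a fixed-order coefficient vector like `a` would be computed per recording segment and concatenated with the MFCC vector before being fed to the ANN classifier.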