
Latest publications from the 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA)

Sparse representation super-resolution method for enhancement analysis in video forensics
N. Zamani, A. D. M. Zahamdin, S. Abdullah, M. J. Nordin
Enhancement analysis in video forensics is used to improve the clarity of the frames of a video exhibit. The enhanced frames are important for assisting law enforcement agencies in investigations or for tendering as evidence in court. The most significant problem observed in the analysis is the enhancement of objects under probe in a video. In many cases, the probes appear in low resolution and are degraded by noise, lens blur and compression artifacts. Enhancing these low-quality probes via conventional denoising and resizing has been shown to further degrade their quality. The objective of this paper is to propose an enhancement analysis algorithm based on super-resolution; we present a single-frame solution. For that purpose, our proposed method incorporates sparse coding with Non-Negative Matrix Factorization in order to improve the hallucination of probes in video. Sparse coding is employed to learn a localized, part-based subspace which synthesizes higher resolution with respect to overcomplete patch dictionaries. We test our proposed method by enhancing probes in exhibit videos and compare it with state-of-the-art resampling and super-resolution methods. We measure image quality using the peak signal-to-noise ratio (PSNR). The results show that our proposed method outperforms the state-of-the-art methods when enhancing probes in exhibit videos.
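To make the evaluation and dictionary-learning ideas concrete, here is a minimal Python sketch: a PSNR function (the quality measure used in the paper) plus a non-negative, sparsely activated patch dictionary learned with scikit-learn's NMF. The patch data, component count and penalty weights are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio, the quality measure used in the paper."""
    mse = np.mean((np.asarray(reference, dtype=np.float64) - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Stand-in data: rows are vectorized 8x8 low-resolution patches.
rng = np.random.default_rng(0)
patches = rng.random((500, 64))

# Learn an overcomplete, part-based patch dictionary; the L1 penalty on the
# activations (alpha_W with l1_ratio=1.0) encourages sparse codes.
model = NMF(n_components=128, init="random", alpha_W=0.1, l1_ratio=1.0,
            max_iter=500, random_state=0)
codes = model.fit_transform(patches)   # sparse activations per patch
dictionary = model.components_         # learned non-negative patch atoms
print(psnr(255 * patches, 255 * (codes @ dictionary)))
```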
Cited by: 8
Face recognition system invariant to plastic surgery
N. Lakshmiprabha, S. Majumder
Facial plastic surgery changes facial features to a large extent, creating a major problem for face recognition systems. This paper proposes a new face recognition system that uses a novel shape local binary texture (SLBT) feature extracted from face images, cascaded with a periocular feature, for plastic-surgery-invariant face recognition. Despite their individual strengths, existing feature extraction methods can extract either shape or texture features; a method which can extract both is more attractive. The proposed SLBT extracts global shape, local shape and texture information from a face image by computing local binary patterns (LBP), instead of using direct intensity values, on the shape-free patch of an active appearance model (AAM). Experiments conducted on the MUCT and plastic surgery face databases show that the SLBT feature performs better than AAM and LBP features. A further increase in recognition rate is achieved by cascading SLBT features from the face with LBP features from the periocular regions. Results on surgical and non-surgical face databases show that the proposed face recognition system can readily handle illumination, pose, expression, occlusion and plastic surgery variations in face images.
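The texture half of the SLBT descriptor builds on local binary patterns. As a hedged illustration of that single step (not the full AAM shape-free-patch pipeline), the snippet below computes a uniform LBP histogram with scikit-image; the neighbourhood parameters and the random test image are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1):
    """Uniform LBP histogram of a grayscale image."""
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2  # P+1 uniform labels plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

face_patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(lbp_histogram(face_patch))
```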
Cited by: 22
Prediction of risk score for heart disease using associative classification and hybrid feature subset selection
M. Jabbar, P. Chandra, B. Deekshatulu
Medical data mining is the search for relationships and patterns within medical data that could provide useful knowledge for effective medical diagnosis. Extracting useful information from these databases can lead to the discovery of rules for later diagnostic tools. Medical databases are generally highly voluminous, and if a training data set contains irrelevant and redundant features, classification may produce less accurate results. Feature selection is used as a pre-processing step to reduce dimensionality, remove irrelevant data, increase accuracy and improve comprehensibility. Associative classification is a recent and rewarding technique that applies the methodology of association rule mining to classification and achieves high classification accuracy. Most associative classification algorithms adopt exhaustive search, as in Apriori, and generate a huge number of rules, from which a set of high-quality rules is chosen to construct an efficient classifier. Generating a small set of high-quality rules to build the classifier is therefore a challenging task. Cardiovascular diseases are the leading cause of death globally, and in India many deaths are due to coronary heart disease (CHD); cardiovascular disease is an increasingly important cause of death in Andhra Pradesh. Hence there is an urgent need for a system to predict heart disease. This paper discusses the prediction of a risk score for heart disease in Andhra Pradesh. We generate class association rules using feature subset selection; these rules can help physicians predict a patient's heart disease.
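As a rough illustration of class association rule mining (generic Apriori via the mlxtend package, not the authors' exact algorithm, thresholds or data), the sketch below mines rules from a hypothetical one-hot-encoded patient table and keeps only the rules whose consequent is the class label.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical discretized patient records; column names are made up.
records = pd.DataFrame({
    "age_gt_50": [1, 1, 0, 1, 0, 1],
    "chol_high": [1, 0, 0, 1, 1, 1],
    "bp_high":   [1, 1, 0, 1, 0, 0],
    "risk_high": [1, 1, 0, 1, 0, 1],  # class label
}).astype(bool)

frequent = apriori(records, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# Class association rules: only keep rules predicting the class label.
car = rules[rules["consequents"].apply(lambda c: c == frozenset({"risk_high"}))]
print(car[["antecedents", "support", "confidence"]])
```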
Cited by: 63
Force sensing resistors for monitoring proprioception response in rehabilitation routines
A. Gopalai, S. M. N. Arosha Senanayake
During rehabilitation routines for postural control, clinicians use proprioception training involving wobble boards to help strengthen proprioception. Wobble board routines are carried out for at least six weeks, during which subjects are required to perform certain motions on the boards targeted at improving proprioception. Subjects perform these tasks without (or with minimal) real-time feedback. A real-time system to monitor proprioception training using a wobble board was designed and tested. This work presents a force sensing platform, equipped with soft-computing methods, to measure the effects of destabilizing postural perturbations. Experiments were conducted to verify the system's ability to monitor and gauge a subject's postural control via proprioception. The experimental set-up was observed three times a week for six weeks. Fuzzy clustering and area-of-sway analysis were used to determine the effects of training on subjects' postural control in Eyes Open (EO) and Eyes Closed (EC) conditions. All data were tabulated and compared using one-way ANOVA to determine statistical significance, with a false rejection ratio α = 0.05. The results of the experiment support the suitability of the system for clinical applications pertaining to postural control improvement.
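The significance test named above is standard one-way ANOVA at α = 0.05; a minimal SciPy sketch follows, with made-up sway-area numbers standing in for the real measurements.

```python
from scipy.stats import f_oneway

# Hypothetical area-of-sway measurements (cm^2) from three stages of the
# six-week programme; smaller sway suggests better postural control.
weeks_1_2 = [14.2, 13.8, 15.1, 14.9, 13.5]
weeks_3_4 = [12.1, 11.8, 12.9, 12.4, 11.5]
weeks_5_6 = [10.2, 9.8, 10.9, 10.4, 9.5]

f_stat, p_value = f_oneway(weeks_1_2, weeks_3_4, weeks_5_6)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # the paper's false rejection ratio alpha = 0.05
    print("training effect is statistically significant")
```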
Cited by: 1
A novel Block Matching Algorithmic Approach with smaller block size for motion vector estimation in video compression
S. Acharjee, N. Dey, D. Biswas, P. Das, S. S. Chaudhuri
Motion estimation is the most computationally expensive operation in the entire video compression process. The challenge is to reduce the computational complexity and running time of the Exhaustive Search algorithm without losing too much output quality. The proposed work implements a novel block matching algorithm for motion vector estimation which performs better than conventional block matching algorithms such as Three Step Search (TSS), New Three Step Search (NTSS) and Four Step Search (FSS).
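For context, the exhaustive-search baseline that TSS, NTSS and FSS approximate can be sketched as below; the 8x8 block size, the +/-7 search window and the SAD cost are conventional choices, not details taken from the paper.

```python
import numpy as np

def full_search(ref, cur, top, left, block=8, search=7):
    """Exhaustive block matching: return the motion vector (dy, dx) that
    minimizes the sum of absolute differences (SAD) within the window."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cand - target).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

Fast algorithms such as TSS reduce the 225 candidate positions of this +/-7 window to a few dozen by coarse-to-fine stepping, which is where the complexity savings come from.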
Cited by: 24
Fuzzy cluster descriptors improve flexible organization of documents
T. Nogueira, S. O. Rezende, H. Camargo
System flexibility means the ability of a system to manage imprecise and/or uncertain information. There are two ways to address the flexibility of Information Retrieval Systems (IRS): through methods that improve the query formulation and through methods that improve the document organization. Since query formulation has received more attention in the retrieval process, we aim to investigate flexibility in document organization. When document organization is carried out using fuzzy clustering, documents can belong to more than one cluster simultaneously, with different membership degrees, allowing the management of imprecise and/or uncertain information in the collection organization. Clusters represent topics and are identified by one or more descriptors. In this work we use an unsupervised method to extract cluster descriptors for a specific database and investigate whether the quality of the fuzzy cluster descriptors improves the flexible organization of documents.
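To make graded membership concrete, here is a minimal self-contained fuzzy c-means sketch (a generic algorithm, not the authors' descriptor-extraction method); each row of the returned matrix U gives one document's membership degrees across the topic clusters.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: soft memberships U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # random soft start
    for _ in range(n_iter):
        W = U ** m                                        # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                  # closer => higher
        U /= U.sum(axis=1, keepdims=True)                 # rows sum to one
    return U, centers

docs = np.random.rand(20, 50)   # stand-in document vectors
U, centers = fuzzy_cmeans(docs)
print(U[0])                     # one document, several membership degrees
```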
Cited by: 4
A local complexity based combination method for decision forests trained with high-dimensional data
Yoisel Campos, Carlos Morell, F. Ferri
Accurate machine learning with high-dimensional data is affected by the phenomenon known as the "curse" of dimensionality. One of the main strategies explored in the last decade to deal with this problem is the use of multi-classifier systems. Several such approaches are inspired by the Random Subspace Method for the construction of decision forests. Furthermore, other studies rely on estimates of the individual classifiers' competence to enhance the combination in the multi-classifier system and improve accuracy. We propose a competence estimate based on local complexity measurements, used to perform a weighted average combination of the decision forest. Experimental results show that this idea significantly outperforms both the standard non-weighted average combination and the well-known Classifier Local Accuracy competence estimate, while consuming significantly less time.
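The Classifier Local Accuracy baseline named above can be sketched as weighting each subspace classifier by its accuracy in the neighbourhood of the query point; the snippet below is a generic illustration with assumed names (classifiers, subspaces, validation data), not the paper's local-complexity competence estimate itself.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_accuracy_vote(classifiers, subspaces, X_val, y_val, x, k=7):
    """Weighted-average combination: each classifier's class probabilities
    are scaled by its accuracy on the k validation points nearest to x."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neigh = idx[0]
    combined = None
    for clf, feats in zip(classifiers, subspaces):
        local_acc = (clf.predict(X_val[neigh][:, feats]) == y_val[neigh]).mean()
        proba = clf.predict_proba(x[feats].reshape(1, -1))[0]
        weighted = local_acc * proba
        combined = weighted if combined is None else combined + weighted
    return np.argmax(combined)  # index into the shared class ordering
```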
Cited by: 7
Using Self-Organizing Maps in constrained ensemble clustering framework
R. Visakh
Clustering is a predominant data mining task which attempts to partition a group of unlabelled data instances into distinct clusters, such that the clusters obtained have maximum intra-cluster similarity and minimum inter-cluster similarity. Several clustering techniques have been proposed in the literature, including stand-alone as well as ensemble clustering techniques. Most of them lack robustness and suffer from an important drawback: they cannot effectively visualize clustering results to support knowledge discovery and constructive learning. Recently, clustering techniques based on data visualization have been proposed. These rely on building a Self-Organizing Map (SOM), originally proposed by Kohonen. Even though the Kohonen SOM preserves the topology of the input data, it is widely observed that the clustering accuracy achieved by SOM alone is poor. To perform robust and accurate clustering using SOM, a cluster ensemble framework based on input constraints is proposed in this paper. A cluster ensemble is a set of clustering solutions obtained by individually clustering subsets of the original high-dimensional data. The final consensus matrix is fed to a neural network which transforms the input data into a lower-dimensional output map. The map clearly depicts the distribution of input data instances into clusters.
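As a hedged sketch of the final mapping step, the snippet below feeds rows of a stand-in consensus matrix to a SOM using the third-party MiniSom package; the grid size, training parameters and random consensus data are all assumptions.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Stand-in consensus matrix: entry (i, j) approximates the fraction of
# ensemble members that put instances i and j in the same cluster.
rng = np.random.default_rng(0)
consensus = rng.random((100, 100))
consensus = (consensus + consensus.T) / 2.0   # symmetrize

som = MiniSom(8, 8, input_len=consensus.shape[1],
              sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(consensus, num_iteration=2000)

# Map each instance (one consensus row) to its best-matching grid node;
# instances that cluster together land on nearby nodes of the 2-D map.
positions = np.array([som.winner(row) for row in consensus])
print(positions[:5])
```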
Cited by: 2
Performance analysis on three dimensional surface reconstruction of head magnetic resonance images
R. Preetha, G. Suresh
In MRI images, the boundary of encephalic tissue is highly curved and irregular, which makes its three-dimensional reconstruction complicated. Surface reconstruction is the sub-field of medical imaging that provides an effective way to investigate and diagnose brain-related diseases efficiently. The basic purpose of 3-D surface reconstruction is to analyze brain images precisely in order to diagnose and examine diseases for surgical planning and tumor localization; reconstructing tumor images is the goal in dealing with these images. This paper gives a brief overview of the advantages and disadvantages of existing surface reconstruction methods in clinical applications. Traditional cube-based algorithms extract the surface by forming an imaginary cube and then determining the polygons needed to represent the part of the isosurface that passes through this cube. However, they require post-processing, need more computational time for reconstruction, and cannot provide a proof of correctness. Vector-machine-based algorithms such as the Immune Sphere-Shaped Support Vector Machine (ISSSVM) transform the highly irregular object into a high-dimensional feature space and construct a hyper-sphere, as compact as possible, which encloses almost all of the target object. This paper concludes that ISSSVM can outperform the cube-based algorithms by reconstructing the irregular boundaries of encephalic tissue efficiently without post-processing, and it can also provide a proof of correctness with greater accuracy.
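The cube-based family discussed here is exemplified by marching cubes; a minimal scikit-image illustration on a synthetic volume follows (the sphere is stand-in data, not an MRI exhibit).

```python
import numpy as np
from skimage import measure

# Stand-in volume: a solid sphere inside a 64^3 grid.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

# Marching cubes polygonizes every imaginary voxel cube that the
# isosurface (here at level 0.5) passes through.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```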
Cited by: 2
Entropy based bug prediction using support vector regression
V. B. Singh, K. K. Chaturvedi
Predicting software defects is one of the key areas of research in software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches based on code churn, past bugs, refactoring, number of authors, file size and age, etc., measuring performance in terms of accuracy and complexity. Different mathematical models have also been developed in the literature to monitor the bug occurrence and fixing process. These mathematical models, known as software reliability growth models, are either calendar-time or testing-effort dependent. Bugs occur in software mainly because of continuous changes in the code, and these continuous changes make the code complex. The complexity of code changes has already been quantified in terms of entropy by Hassan [9]. In the available literature, a few authors have proposed entropy-based bug prediction using the conventional simple linear regression (SLR) method. In this paper, we propose an entropy-based bug prediction approach using support vector regression (SVR). We compare the results of the proposed models with the existing one in the literature and find that the proposed models are good bug predictors, showing a significant improvement in performance.
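A hedged sketch of the proposed direction (entropy of code changes in, predicted bug count out, via scikit-learn's SVR) follows; the entropy values, bug counts and hyperparameters are made up for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical data: Shannon entropy of code changes per release period
# against the number of bugs observed afterwards.
entropy = np.array([[0.42], [0.55], [0.61], [0.70], [0.78], [0.85]])
bugs = np.array([3.0, 5.0, 6.0, 9.0, 11.0, 14.0])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(entropy, bugs)
print(model.predict([[0.80]]))  # predicted bug count for a new entropy value
```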
软件缺陷预测是软件工程研究的关键领域之一。研究人员已经设计并实现了大量的缺陷/bug预测方法,即代码流失、过去的bug、重构、作者数量、文件大小和使用时间等,方法是根据准确性和复杂性来衡量性能。文献中还开发了不同的数学模型来监控错误的发生和修复过程。这些被称为软件可靠性增长模型的现有数学模型要么依赖于日历时间,要么依赖于测试工作量。软件中出现bug主要是由于软件代码的不断变化。软件代码的不断变化使代码变得复杂。Hassan[9]已经用熵的形式量化了代码变化的复杂性。在现有文献中,很少有作者利用传统的简单线性回归(SLR)方法提出基于熵的bug预测。本文提出了一种基于熵的基于支持向量回归(SVR)的bug预测方法。我们将所提出的模型的结果与文献中已有的模型进行了比较,发现所提出的模型是很好的bug预测器,因为它们在性能上显示出了显着的改进。
{"title":"Entropy based bug prediction using support vector regression","authors":"V. B. Singh, K. K. Chaturvedi","doi":"10.1109/ISDA.2012.6416630","DOIUrl":"https://doi.org/10.1109/ISDA.2012.6416630","url":null,"abstract":"Predicting software defects is one of the key areas of research in software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches namely code churn, past bugs, refactoring, number of authors, file size and age, etc by measuring the performance in terms of accuracy and complexity. Different mathematical models have also been developed in the literature to monitor the bug occurrence and fixing process. These existing mathematical models named software reliability growth models are either calendar time or testing effort dependent. The occurrence of bugs in the software is mainly due to the continuous changes in the software code. The continuous changes in the software code make the code complex. The complexity of the code changes have already been quantified in terms of entropy as follows in Hassan [9]. In the available literature, few authors have proposed entropy based bug prediction using conventional simple linear regression (SLR) method. In this paper, we have proposed an entropy based bug prediction approach using support vector regression (SVR). We have compared the results of proposed models with the existing one in the literature and have found that the proposed models are good bug predictor as they have shown the significant improvement in their performance.","PeriodicalId":370150,"journal":{"name":"2012 12th International Conference on Intelligent Systems Design and Applications (ISDA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125990788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 30