
2014 14th UK Workshop on Computational Intelligence (UKCI): Latest Publications

Automatic image annotation with long distance spatial-context
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930181
Donglin Cao, Dazhen Lin, Jiansong Yu
Because of its high computational complexity, long-distance spatial-context based automatic image annotation is hard to achieve. Some state-of-the-art approaches in image processing, such as the 2D-HMM, consider only short-distance spatial context (two neighbors) to reduce the computational complexity. However, these approaches cannot describe long-distance semantic spatial context in images. Therefore, in this paper, we propose a two-step Long Distance Spatial-context Model (LDSM) to solve this problem. First, because of the high computational complexity of a 2D spatial context, we transform the 2D spatial context into a 1D sequence context. Second, we use conditional random fields to model the 1D sequence context. Our experiments show that LDSM models the semantic relation between annotated objects and background, and the experimental results outperform a classical automatic image annotation approach (SVM).
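The transform-then-chain idea lends itself to a compact sketch. The code below is a minimal illustration rather than the authors' implementation: it flattens a 2D grid of per-region features into a 1D sequence with a raster scan and fits a linear-chain CRF using the third-party sklearn-crfsuite package; the scan order, the toy colour features, and the region labels are all assumptions made for the example.

```python
# Minimal sketch of the LDSM two-step pipeline (illustrative assumptions
# throughout): (1) flatten a 2D grid of image-region features into a 1D
# sequence via raster scan; (2) fit a linear-chain CRF on the sequence.
# Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

def grid_to_sequence(feature_grid, label_grid):
    """Raster-scan a 2D grid of per-region feature dicts into a 1D sequence.
    A linear chain over this scan lets the CRF propagate context well beyond
    the two immediate neighbors a 2D-HMM would consider."""
    xseq, yseq = [], []
    for i, row in enumerate(feature_grid):
        for j, feats in enumerate(row):
            token = dict(feats)
            token["row"], token["col"] = i, j   # keep coarse position info
            xseq.append(token)
            yseq.append(label_grid[i][j])
    return xseq, yseq

# Toy data: a 2x3 grid of regions with one colour feature each.
feature_grid = [[{"colour": "blue"}, {"colour": "blue"}, {"colour": "white"}],
                [{"colour": "green"}, {"colour": "green"}, {"colour": "grey"}]]
label_grid = [["sky", "sky", "cloud"],
              ["grass", "grass", "rock"]]

X, y = grid_to_sequence(feature_grid, label_grid)
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([X], [y])                                # expects lists of sequences
print(crf.predict([X])[0])
```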
Citations: 0
Data density based clustering
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930157
Richard Hyde, P. Angelov
A new, data-density-based approach to clustering is presented which automatically determines the number of clusters. By using recursive density estimation (RDE) for each data sample, the number of calculations is significantly reduced in offline mode and, further, the method is suitable for online use. The clusters allow a different diameter per feature/dimension, creating axis-orthogonal hyper-ellipsoid clusters. This results in greater differentiation between clusters where the clusters are highly asymmetrical. We illustrate this with three standard data sets, one artificial dataset, and a large real dataset, demonstrating results comparable to subtractive, hierarchical, k-means, ELM and DBSCAN clustering techniques. Unlike subtractive clustering, however, we do not iteratively calculate P. Unlike hierarchical clustering, we do not need O(N²) distances to be calculated or a cut-off threshold to be defined. Unlike k-means, we do not need to predefine the number of clusters. Using the RDE equations to calculate the densities, the algorithm is efficient and requires no iteration to approach the optimal result. We compare the proposed algorithm to k-means, subtractive, hierarchical, ELM and DBSCAN clustering with respect to several criteria. The results demonstrate the validity of the proposed approach.
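As a rough illustration of why RDE keeps the calculation count low, the sketch below updates a global density estimate online from just two running statistics, so no pass over earlier samples is needed. It follows the recursive density form from Angelov's RDE publications but should be read as an assumption rather than this paper's exact equations, and cluster-formation logic is deliberately omitted.

```python
# Recursive density estimation (RDE) sketch: the density of each new sample
# is computed from a running mean and a running mean squared norm, updated
# in O(1) per sample.
import numpy as np

class RDE:
    def __init__(self, n_features):
        self.k = 0
        self.mean = np.zeros(n_features)   # running mean of the samples
        self.scalar = 0.0                  # running mean of ||x||^2

    def density(self, x):
        """Update the running statistics with x and return its density."""
        self.k += 1
        w = 1.0 / self.k
        self.mean = (1 - w) * self.mean + w * x
        self.scalar = (1 - w) * self.scalar + w * float(x @ x)
        # D(x) = 1 / (1 + ||x - mean||^2 + scalar - ||mean||^2)
        spread = self.scalar - float(self.mean @ self.mean)
        dist2 = float((x - self.mean) @ (x - self.mean))
        return 1.0 / (1.0 + dist2 + spread)

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])
rde = RDE(2)
densities = np.array([rde.density(x) for x in data])
print(densities.argmax(), densities.max())   # densest sample seen so far
```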
Citations: 43
Human activity classification using a single accelerometer
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930189
Hamzah S. AlZu'bi, Simon Gerrard-Longworth, W. Al-Nuaimy, J. Goulermas, S. Preece
Human activity recognition is an area of growing interest, facilitated by the current revolution in body-worn sensors. Activity recognition allows applications to construct activity profiles for each subject, which can be used effectively in healthcare and safety applications. Automated human activity recognition systems face several challenges, such as the number of sensors, sensor precision, and differences in gait style. This work proposes a machine learning system that automatically recognises human activities from a single body-worn accelerometer. The in-house collected dataset contains the 3D acceleration of 50 subjects performing 10 different activities, and was produced to ensure robustness and prevent subject-biased results. The feature vector is derived from simple statistical features. The proposed method uses the RGB-to-YIQ colour space transform as a kernel to map the feature vector into more discriminable features. The classification technique is based on an adaptive boosting (AdaBoost) ensemble classifier. The proposed system shows consistent classification performance, with up to 95% accuracy across the 50 subjects.
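To make the pipeline concrete, here is a small sketch under stated assumptions: the window length, the choice of per-axis mean and standard deviation as statistics, and the way the standard RGB-to-YIQ matrix is applied to each triplet of per-axis statistics are all guesses at the paper's setup, and the data are synthetic stand-ins; only the AdaBoost ensemble step mirrors the abstract directly.

```python
# Sketch: window a 3-axis acceleration stream, extract simple statistics,
# pass each (x, y, z) statistic triplet through the RGB-to-YIQ matrix
# (an assumed reading of the paper's kernel step), classify with AdaBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

YIQ = np.array([[0.299, 0.587, 0.114],     # standard RGB -> YIQ matrix
                [0.596, -0.274, -0.322],
                [0.211, -0.523, 0.312]])

def window_features(acc, win=128):
    """acc: (n_samples, 3) accelerations. One feature row per window."""
    rows = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        stats = np.concatenate([w.mean(0), w.std(0)])   # per-axis mean, std
        rows.append(np.concatenate([YIQ @ stats[:3], YIQ @ stats[3:]]))
    return np.array(rows)

rng = np.random.default_rng(1)
walking = rng.normal(0.0, 1.0, (1280, 3))   # synthetic stand-ins for
running = rng.normal(0.0, 3.0, (1280, 3))   # two different activities
X = np.vstack([window_features(walking), window_features(running)])
y = np.array([0] * 10 + [1] * 10)           # 10 windows per activity
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.score(X, y))
```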
Citations: 9
A rule based system for diagnosing and treating chronic heart failure
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930178
Luke Vella Critien, Arjab Singh Khuman, Jenny Carter, S. Ahmadi
The aim of this study is to design a rule-based expert system that provides doctors with a better tool for managing chronic heart failure in adults. The system is intended to help diagnose heart failure at the earliest possible stage and subsequently suggest the best treatment for the particular case. The designed system has two facets: one related to the diagnosis and the other to the treatment of chronic heart failure. The former is based on the chronic heart failure guidelines issued by the National Health Service (NHS) National Institute for Health and Clinical Excellence (NICE) in August 2010. The treatment of chronic heart failure is based on the latest version of the British National Formulary (BNF). This rule-based system is not intended to replace the specialist, but it may be used to provide assurance that all diagnostic criteria have been followed and hence that the best possible treatment is given. The system is implemented in CLIPS, a powerful forward-chaining rule-based language.
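The abstract names CLIPS as the implementation language; to keep all sketches in this listing in one language, the toy engine below re-creates the forward-chaining pattern in Python. The rule contents (symptoms, findings, and actions) are entirely hypothetical placeholders and are not taken from the NICE guideline or the BNF.

```python
# Toy forward-chaining engine illustrating the rule-based pattern described
# above: repeatedly fire any rule whose conditions are all in working memory
# and assert its conclusion, until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)        # fire the rule
                changed = True
    return facts

# Hypothetical rules, NOT clinical guidance.
rules = [
    ({"breathlessness", "ankle_swelling"}, "suspect_heart_failure"),
    ({"suspect_heart_failure", "abnormal_echo"}, "diagnosis_confirmed"),
    ({"diagnosis_confirmed"}, "refer_to_specialist"),
]
print(forward_chain({"breathlessness", "ankle_swelling", "abnormal_echo"},
                    rules))
```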
Citations: 2
Investigating the relationship between the distribution of local semantic concepts and local keypoints for image annotation
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930165
Yousef Alqasrawi, D. Neagu
The problem of image annotation has gained increasing attention from researchers in computer vision. Few works have addressed the use of the bag-of-visual-words model for scene annotation at the region level. The aim of this paper is to study the relationship between the distribution of local semantic concepts and the local keypoints located in image regions labelled with those concepts. Based on this study, we investigate whether the bag-of-visual-words model can be used to efficiently represent the content of natural scene image regions, so that images can be annotated with local semantic concepts. This paper also presents a local-from-global approach, which studies the influence of using visual vocabularies generated from general scene categories to build bags of visual words at the region level. Extensive experiments are conducted on a natural scene dataset with six categories. The reported results show the plausibility of using the BoW model to represent the semantic information of image regions.
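A region-level bag of visual words can be sketched in a few lines. The version below is a generic construction, not the authors' pipeline: it assumes local descriptors have already been extracted around keypoints (the random vectors here are stand-ins), builds the visual vocabulary with k-means, a common choice, and describes each labelled region by a normalised histogram of the visual words its keypoints map to.

```python
# Sketch of region-level bag of visual words: cluster local descriptors into
# a vocabulary, then histogram each region's keypoint assignments.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
all_descriptors = rng.normal(size=(1000, 32))   # stand-in local descriptors

vocab = KMeans(n_clusters=50, n_init=4, random_state=0).fit(all_descriptors)

def region_bow(region_descriptors, vocab):
    """L1-normalised histogram of visual-word assignments for one region."""
    words = vocab.predict(region_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

sky_region = rng.normal(size=(40, 32))          # descriptors from one region
print(region_bow(sky_region, vocab)[:10])
```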
Citations: 0
Kernel learning method for distance-based classification of categorical data
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930159
Lifei Chen, G. Guo, Shengrui Wang, Xiangzeng Kong
Kernel-based methods have become popular in machine learning; however, they are typically designed for numeric data and are established in vector spaces, which are undefined for categorical data. In this paper, we propose a new kind of kernel trick, showing that the mapping of categorical samples into kernel spaces can alternatively be described as assigning a kernel-based weight to each categorical attribute of the input space, so that common distance measures can be employed. A data-driven approach to kernel bandwidth selection is then proposed, based on optimizing feature weights. We also make use of the kernel-based distance measure to extend nearest-neighbor classification to categorical data. Experimental results on real-world data sets show the outstanding performance of this approach compared to that obtained in the original input space.
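The weight-per-attribute view translates directly into a weighted mismatch distance. In the sketch below the weights come from a simple Gini-style diversity of each attribute's value frequencies, which is only a stand-in for the paper's kernel-bandwidth optimisation; the nearest-neighbour step then uses the weighted distance as described.

```python
# Sketch: distance-based classification of categorical data with one weight
# per attribute, plugged into k-nearest-neighbour voting.
import numpy as np
from collections import Counter

def attribute_weights(X):
    """Heavier weight for attributes whose values are more evenly spread
    (an assumed proxy for the paper's data-driven bandwidth selection)."""
    w = []
    for j in range(X.shape[1]):
        counts = np.array(list(Counter(X[:, j]).values()), dtype=float)
        p = counts / counts.sum()
        w.append(1.0 - float((p ** 2).sum()))   # Gini-style diversity
    return np.array(w)

def weighted_mismatch(a, b, w):
    return float(np.sum(w * (a != b)))          # weighted Hamming distance

def knn_predict(X, y, query, w, k=3):
    d = np.array([weighted_mismatch(x, query, w) for x in X])
    nearest = y[np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]

X = np.array([["red", "small"], ["red", "large"],
              ["blue", "small"], ["blue", "large"]])
y = np.array(["A", "A", "B", "B"])
w = attribute_weights(X)
print(knn_predict(X, y, np.array(["blue", "small"]), w))
```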
Citations: 5
Dynamic railway junction rescheduling using population based ant colony optimisation
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930174
Jayne Eaton, Shengxiang Yang
Efficient rescheduling after a perturbation is an important concern of the railway industry. Extreme delays can result in large fines for the train company as well as dissatisfied customers. The problem is exacerbated by the fact that it is dynamic: more timetabled trains may arrive while the perturbed trains are waiting to be rescheduled. The new trains may have different priorities from the existing trains, so the rescheduling problem changes over time. The aim of this research is to apply a population-based ant colony optimisation algorithm to this dynamic railway junction rescheduling problem, using a simulator modelled on a real-world junction in the UK railway network. The results are promising: the algorithm performs well, particularly when the dynamic changes are of high magnitude and frequency.
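In population-based ACO variants, pheromone is typically maintained by a small population of stored solutions: solutions entering the population deposit pheromone and displaced ones remove theirs, which keeps updates cheap when the problem changes mid-run. The sketch below applies that pattern to a toy junction where waiting trains are ordered to minimise priority-weighted delay; the junction model, priorities, and update constants are all invented for illustration and do not come from the paper's simulator.

```python
# Population-based ACO sketch on a toy junction: choose an order for the
# waiting trains that keeps the total priority-weighted delay low.
import random

trains = {"IC1": 3.0, "IC2": 3.0, "LOC1": 1.0, "LOC2": 1.0}   # priorities

def cost(order):
    """Total priority-weighted delay if each train occupies one slot."""
    return sum(slot * trains[t] for slot, t in enumerate(order))

def build(pheromone):
    """Construct one ordering, slot by slot, biased by pheromone."""
    order, remaining = [], list(trains)
    for slot in range(len(trains)):
        weights = [pheromone[(slot, t)] for t in remaining]
        pick = random.choices(remaining, weights)[0]
        order.append(pick)
        remaining.remove(pick)
    return order

pheromone = {(s, t): 1.0 for s in range(len(trains)) for t in trains}
population, pop_size = [], 4
for _ in range(200):
    order = build(pheromone)
    population.append((cost(order), order))
    for slot, t in enumerate(order):        # entering solution deposits
        pheromone[(slot, t)] += 0.5
    population.sort(key=lambda p: p[0])
    if len(population) > pop_size:          # displaced solution removes its
        _, old = population.pop()           # pheromone (no evaporation step)
        for slot, t in enumerate(old):
            pheromone[(slot, t)] = max(pheromone[(slot, t)] - 0.5, 0.1)
print(population[0])                        # best ordering found
```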
Citations: 7
Neural networks and wavelet transform in waveform approximation
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930164
P. Faragó, G. Oltean, L. Ivanciu
To fully analyze the time response of a complex system and discover its critical operating points, the output waveform needs to be generated under all conceivable conditions. Conventional methods such as physical experiments or detailed simulations can be prohibitive from a resources point of view (time, equipment). The challenge is to generate the waveform, through its numerous time samples, as a function of the different operating conditions described by a set of parameters. In this paper, we propose a fast-to-evaluate yet accurate model that approximates the waveforms, as a reliable substitute for complex physical experiments or overwhelming system simulations. Our proposed model consists of two stages. In the first stage, a previously trained artificial neural network produces the "primary" coefficients of a wavelet transform. In the second stage, an inverse wavelet transform generates all the time samples of the expected waveform, fusing the "primary" coefficients with "secondary" coefficients previously extracted from the nominal waveform of the family. Test results for 100 different combinations of three waveform parameters show that our model is reliable, featuring high accuracy and generalization capability as well as high computation speed.
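The two-stage model can be sketched with a small regressor and a standard wavelet library. Everything below is a toy under stated assumptions: the waveform family is synthetic, the split treating the coarsest two coefficient bands as "primary" is a guess at the paper's partition, and PyWavelets plus scikit-learn stand in for whatever tools the authors used.

```python
# Sketch: an MLP maps operating-condition parameters to "primary" (coarse)
# wavelet coefficients; these are fused with "secondary" coefficients stored
# from a nominal waveform, then inverse-transformed into time samples.
# Requires: pip install pywavelets scikit-learn
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def make_waveform(params, t=np.linspace(0, 1, 256)):
    amp, decay, freq = params                    # synthetic damped sinusoid
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

rng = np.random.default_rng(3)
P = rng.uniform([0.5, 1.0, 3.0], [2.0, 5.0, 6.0], size=(200, 3))
coeff_sets = [pywt.wavedec(make_waveform(p), "db4", level=4) for p in P]
primary = np.array([np.concatenate(c[:2]) for c in coeff_sets])  # coarse part

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(P, primary)

# "Secondary" (fine-detail) coefficients come from the nominal waveform.
nominal = pywt.wavedec(make_waveform([1.0, 2.0, 4.0]), "db4", level=4)
sizes = [len(c) for c in nominal]

def predict_waveform(params):
    flat = net.predict([params])[0]
    coarse, i = [], 0
    for n in sizes[:2]:                          # unflatten the primary part
        coarse.append(flat[i:i + n])
        i += n
    return pywt.waverec(coarse + nominal[2:], "db4")

approx = predict_waveform([1.2, 3.0, 5.0])
print(np.abs(approx - make_waveform([1.2, 3.0, 5.0])).max())
```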
Citations: 2
A new weighting scheme and discriminative approach for information retrieval in static and dynamic document collections
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930160
O. Ibrahim, Dario Landa Silva
This paper introduces a new weighting scheme for information retrieval. It also proposes using the document centroid as a threshold for normalizing documents in a document collection. Document centroid normalization helps to achieve more effective information retrieval, as it enables good discrimination between documents. In the context of a machine learning application, namely unsupervised document indexing and retrieval, we compared the effectiveness of the proposed weighting scheme to term frequency-inverse document frequency (TF-IDF), which is widely used and considered one of the best existing weighting schemes. The paper shows how the document centroid is used to remove less significant weights from documents and how this helps to achieve better retrieval effectiveness. Most existing weighting schemes in information retrieval research assume that the whole document collection is static. The results presented in this paper show that the proposed weighting scheme produces higher retrieval effectiveness than the TF-IDF weighting scheme in both static and dynamic document collections. The results also show the variation in retrieval effectiveness achieved for static and dynamic document collections by a given weighting scheme, a comparison that has not been presented in the literature before.
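One way to read the centroid-threshold idea is sketched below: compute a TF-IDF matrix, take the collection centroid (the mean document vector), and zero any weight that falls below the centroid weight of its term. This is an interpretation for illustration, not the authors' full scheme, and the tiny corpus is invented.

```python
# Sketch of centroid thresholding on top of TF-IDF: weights below each
# term's collection-centroid weight are treated as less significant and
# removed, sharpening discrimination between documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "stock markets fell sharply today",
        "markets rallied as stock prices rose"]

W = TfidfVectorizer().fit_transform(docs).toarray()   # documents x terms

centroid = W.mean(axis=0)                     # document-centroid vector
W_pruned = np.where(W >= centroid, W, 0.0)    # drop less significant weights

print(f"kept {(W_pruned > 0).sum()} of {(W > 0).sum()} nonzero weights")
```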
Citations: 8
PCA-based algorithmic approximation of crisp target sets
Pub Date: 2014-10-20 | DOI: 10.1109/UKCI.2014.6930182
Ray-Ming Chen
Principal Component Analysis (PCA) is an important technique for finding uncorrelated variables. It is applied in many fields, including machine learning, pattern recognition, data mining, and compression. In this paper, we introduce this technique into approximation reasoning. We first construct a theoretical framework for such approximation. The approximation is based on reasoning over incomplete information, in which no algorithm exists to decide the intersection between arbitrary target sets and partitioned clusters, while algorithms do exist for deciding the subset relation between them. Under this framework, we utilize PCA to implement the approximation reasoning: PCA is applied to partition the universe repeatedly until all the partitioned sets are singular or indecomposable. We then collect all the partitioned clusters as granular knowledge and use this knowledge to approximate the target set.
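A minimal sketch of the partition-then-approximate loop appears below, with simplifying assumptions: a minimum cluster size stands in for "singular or indecomposable", and the target-set membership test is computed directly even though the paper's setting only grants subset queries. The data splits on the sign of its first principal-component score, recurses, then bounds a crisp target set from below and above by the resulting clusters.

```python
# Sketch: recursive PCA partitioning of a universe, then lower/upper
# approximation of a crisp target set by the partitioned clusters.
import numpy as np
from sklearn.decomposition import PCA

def partition(points, idx, min_size=10):
    """Split idx on the sign of the first PC score; recurse on both halves."""
    if len(idx) <= min_size or np.allclose(points[idx], points[idx][0]):
        return [idx]                          # treated as indecomposable
    scores = PCA(n_components=1).fit_transform(points[idx]).ravel()
    left, right = idx[scores <= 0], idx[scores > 0]
    if len(left) == 0 or len(right) == 0:
        return [idx]
    return (partition(points, left, min_size) +
            partition(points, right, min_size))

rng = np.random.default_rng(4)
points = rng.uniform(-1, 1, (400, 2))
clusters = partition(points, np.arange(len(points)))

in_target = points[:, 0] ** 2 + points[:, 1] ** 2 <= 0.5   # a crisp disc
lower = [c for c in clusters if in_target[c].all()]   # surely inside
upper = [c for c in clusters if in_target[c].any()]   # possibly intersecting
print(len(clusters), len(lower), len(upper))
```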
Citations: 3