
2013 12th Mexican International Conference on Artificial Intelligence: Latest Publications

Using Different Cost Functions to Train Stacked Auto-Encoders
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.20
Telmo Amaral, Luís M. Silva, Luís A. Alexandre, Chetak Kandaswamy, Jorge M. Santos, J. M. D. Sá
Deep neural networks comprise several hidden layers of units, which can be pre-trained one at a time via an unsupervised greedy approach. A whole network can then be trained (fine-tuned) in a supervised fashion. One possible pre-training strategy is to regard each hidden layer in the network as the input layer of an auto-encoder. Since auto-encoders aim to reconstruct their own input, their training must be based on some cost function capable of measuring reconstruction performance. Similarly, the supervised fine-tuning of a deep network needs to be based on some cost function that reflects prediction performance. In this work we compare different combinations of cost functions in terms of their impact on layer-wise reconstruction performance and on supervised classification performance of deep networks. We employed two classic functions, namely the cross-entropy (CE) cost and the sum of squared errors (SSE), as well as the exponential (EXP) cost, inspired by the error entropy concept. Our results were based on a number of artificial and real-world data sets.
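The three costs compared in this abstract can be sketched as follows for a single reconstruction. This is a minimal illustration only; the exact form of the exponential (EXP) cost used by the authors is not given in the abstract, so the version below (a scaled exponential of the squared-error sum with a free parameter tau) is an assumption for illustration.

```python
import numpy as np

def cross_entropy(t, y, eps=1e-12):
    """Cross-entropy (CE) cost for targets t and outputs y in (0, 1)."""
    y = np.clip(y, eps, 1.0 - eps)
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))

def sum_squared_errors(t, y):
    """Sum of squared errors (SSE) cost."""
    return np.sum((t - y) ** 2)

def exponential_cost(t, y, tau=1.0):
    """Exponential (EXP) cost -- assumed form: tau * exp(SSE / tau)."""
    return tau * np.exp(np.sum((t - y) ** 2) / tau)

# Toy usage: an auto-encoder reconstructing a 4-dimensional input.
t = np.array([0.9, 0.1, 0.4, 0.7])   # original input (reconstruction target)
y = np.array([0.8, 0.2, 0.5, 0.6])   # reconstruction produced by the auto-encoder
print(cross_entropy(t, y), sum_squared_errors(t, y), exponential_cost(t, y))
```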
Citations: 38
Analysis and Transformation of Textual Energy Distribution
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.32
Alejandro Molina-Villegas, Juan-Manuel Torres-Moreno, E. SanJuan, Gerardo E Sierra, Julio Rojas-Mora
In this paper we revisit the Textual Energy model. We address its two major disadvantages: the asymmetry of the distribution and the unboundedness of the maximum value. Although this model has been successfully used in several NLP tasks such as summarization, clustering and sentence compression, no correction of these problems has been proposed until now. Concerning the maximum value, we analyze the computation of the Textual Energy matrix and conclude that energy values are dominated by lexical richness, growing quadratically with the vocabulary size. Using the Box-Cox transformation, we show empirical evidence that a log transformation could correct both problems.
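As a rough illustration of the correction discussed above, the sketch below fits a Box-Cox transformation to a hypothetical vector of positive, right-skewed energy values and compares it with a plain log transform; the `energy` data is synthetic, not taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical right-skewed, strictly positive "textual energy" values.
rng = np.random.default_rng(0)
energy = rng.lognormal(mean=2.0, sigma=1.0, size=500)

# Box-Cox fits a power parameter lambda; a lambda near 0 indicates
# that a log transform is an adequate correction.
transformed, lam = stats.boxcox(energy)
print("fitted Box-Cox lambda:", lam)

# The simple log transform supported by the paper's empirical evidence.
log_energy = np.log(energy)
print("skewness before:", stats.skew(energy), "after log:", stats.skew(log_energy))
```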
Citations: 3
Bird-Like Information Processing for AI-based Pattern Recognition
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.27
T. Pham
Artificial-intelligence (AI)-based pattern recognition is of particular interest to many scientific disciplines, ranging from life science to engineering. Practical applications of pattern or object recognition methods are numerous but still encounter many problems, including the inherent difficulty of computerized feature extraction and classification. This paper proposes a strategy for object recognition that resembles the active template matching strategy used by birds. Experimental results on several databases suggest that using active vision processing can improve the classification rates achieved with various classifiers.
Citations: 0
Evaluation of Semantic Similarity across MeSH Ontology: A Cairo University Thesis Mining Case Study
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.24
Heba Ayeldeen, A. Hassanien, A. Fahmy
Knowledge extraction and text representation are considered the main concerns of organizations nowadays. The estimation of the semantic similarity between words provides a valuable method for enabling the understanding of texts. In biomedical domains, using ontologies has been very effective due to their scalability and efficiency. Extracting knowledge from huge amounts of data is recorded as an open issue in the medical sector. In this paper, we aim to improve knowledge representation by applying the MeSH ontology to medical theses data, analyzing the similarity between the keywords within the theses data and the keywords obtained after using the MeSH ontology. As a result, we are able to better discover the commonalities between theses and hence improve the accuracy of the similarity estimation, which in turn benefits the scientific research sector. Then, the K-means clustering algorithm was applied to find the nearest departments that can work together based on the medical ontology. Experimental evaluations using a data set of 4,878 theses from the medical sector at Cairo University indicate that the proposed approach, using the standard ontology (MeSH), yields results that correlate more closely with human assessments than others. Results show that, compared to related works, using the ontology correlates better with the similarity assessments provided by experts in biomedicine.
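A minimal sketch of the final clustering step, under the assumption that each department is summarized by a bag of MeSH-normalized keywords; the department names and keyword lists below are hypothetical, and the keyword-similarity analysis itself is reduced to TF-IDF vectors for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical per-department keyword profiles (already normalized to MeSH terms).
departments = {
    "Cardiology":   "myocardial infarction hypertension echocardiography",
    "Neurology":    "stroke epilepsy electroencephalography",
    "Oncology":     "neoplasms chemotherapy radiotherapy",
    "Neurosurgery": "stroke neoplasms craniotomy",
}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(departments.values())

# Group departments whose MeSH keyword profiles lie close together.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for name, label in zip(departments, kmeans.labels_):
    print(name, "-> cluster", label)
```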
Citations: 5
Malware Classification Using Euclidean Distance and Artificial Neural Networks
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.18
Lilia E. Gonzalez, R. Vázquez
Most of the malware samples discovered are variations of known malicious programs and thus have similar structures; however, there is no method of malware classification that is completely effective. To address this issue, the approach proposed in this paper represents a malware sample as a vector in which each feature is the number of API functions called from a given Dynamic Link Library (DLL). To determine whether this approach is useful for classifying malware variants into the correct families, we employ Euclidean distance and a multilayer perceptron trained with several learning algorithms. The experimental results are analyzed to determine which method works best with this approach. The experiments were conducted with a database that contains real samples of worms and trojans, and they show that it is possible to classify malware variants using the number of functions imported per library. However, the accuracy varies depending on the method used for the classification.
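The feature representation and the two classifiers described above can be sketched roughly as follows; the DLL columns, counts and family labels are hypothetical, and the MLP settings are arbitrary rather than those tuned in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature vectors: number of API functions imported from each DLL
# (columns e.g. kernel32, user32, ws2_32, advapi32), one row per sample.
X_train = np.array([[12, 3, 0, 5],    # trojan family A
                    [11, 4, 1, 6],    # trojan family A
                    [ 2, 0, 9, 1],    # worm family B
                    [ 3, 1, 8, 0]])   # worm family B
y_train = np.array(["A", "A", "B", "B"])
x_new = np.array([10, 3, 1, 5])       # unknown sample to classify

# 1) Euclidean distance: assign the family of the nearest known sample.
nearest = np.argmin(np.linalg.norm(X_train - x_new, axis=1))
print("Euclidean-distance label:", y_train[nearest])

# 2) Multilayer perceptron trained on the same DLL-count features.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("MLP label:", mlp.predict([x_new])[0])
```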
Citations: 13
Approach Towards a Natural Language Analysis for Diagnosing Mood Disorders and Comorbid Conditions
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.50
N. Howard
Here we propose an approach for developing a diagnosis system for mood disorders, such as depression and bipolar disorder, based on language analysis of speech and text. Our system is based on the Mood State Indicator (MSI) algorithm for real-time analysis of a patient's mental state. MSI is designed to give a quantitative measure of cognitive state based on axiological values and the time orientation of lexical features. MSI's multi-layered analytic engine consists of multiple information processing modules that systematically retrieve, parse and process features of a patient's discourse. Gold-standard clinical criteria will be used to match language analysis indicators to mood disorder diagnoses.
Citations: 5
Quantifiers Types Resolution in NL Software Requirements
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.49
Mehreen Saba, Imran Sarwar Bajwa
Natural language quantifiers can be classified according to their semantic type in addition to their syntactic expression. Quantification in natural language (NL) has two types: ambiguous quantification and unambiguous quantification. Unambiguous quantification is very simple and is also called exact quantification, whereas ambiguous quantification is complex and is also called inexact quantification. Inexact quantifiers include "many, much, a lot of, several, some, any, a few, little, fewer, fewest, less, greater, at least, at most, more, exactly". To address the problems of natural language quantification, we convert these natural language sentences into first-order logic by attaching weights and classify the resulting complex sentences using Markov Logic.
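As a small illustration of the first step, the sketch below detects the inexact quantifiers listed in the abstract inside a requirement sentence; the later stages (attaching weights in first-order logic and classifying with Markov Logic) are not reproduced here.

```python
import re

# Inexact (ambiguous) quantifiers as listed in the abstract; purely numeric
# expressions ("three users", "5 records") are treated as exact quantification.
INEXACT = {"many", "much", "a lot of", "several", "some", "any", "a few", "little",
           "fewer", "fewest", "less", "greater", "at least", "at most", "more", "exactly"}

def quantifier_types(sentence):
    """Return the inexact quantifiers found in a natural-language requirement."""
    text = sentence.lower()
    return sorted(q for q in INEXACT if re.search(r"\b" + re.escape(q) + r"\b", text))

print(quantifier_types("The system shall allow at most several users to log in."))
# -> ['at most', 'several']
```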
Citations: 0
Using a Model of the Cochlea Based in the Micro and Macro Mechanical to Find Parameters for Automatic Speech Recognition
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.39
J. Rodríguez, Jose Francisco Reyes Saldana
Recently, parametric representations based on cochlea behavior have been used in different studies related to Automatic Speech Recognition (ASR). This is because this important organ of hearing in mammals is the principal element that transduces the sound pressure received by the ear. In this paper we show how the macro and micro mechanical model is used in ASR tasks. We used the values that Neely reported in his work on the macro and micro mechanical model to set the central frequencies of a filter bank, obtaining parameters from the speech in a form similar to the way MFCC are constructed. We propose a new approach that uses a new form of constructing the filter bank in our parametric representation, and we use this distribution of the filter bank to obtain a new representation of the speech in the frequency domain. It is important to indicate that MFCC parameters use the Mel scale to create a filter bank in which the central frequency of each filter is a function of that scale. We used the response of Neely's model to create the central frequencies of the filter bank mentioned above, substituting the Mel scale function with this representation. Using the place theory, we reach a performance of 98.5% on a task of isolated digits pronounced by 5 different speakers. Neely's model was used because, when a set of cochlear parameters such as mass, damping and stiffness is substituted into the model, the response obtained is closer to the one von Békésy proposed in his preliminary work on the principal function of the cochlea.
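For comparison, here is a minimal sketch of the conventional step the paper replaces: computing Mel-spaced center frequencies for an MFCC-style filter bank. The authors substitute these centers with values derived from the response of Neely's cochlear model, which are not reproduced here; the filter count and frequency range below are arbitrary.

```python
import numpy as np

def hz_to_mel(f):
    """Standard Mel scale: mel = 2595 * log10(1 + f / 700)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_center_frequencies(n_filters=26, f_min=0.0, f_max=8000.0):
    """Center frequencies (Hz) of a Mel-spaced filter bank, as in standard MFCC."""
    mel_points = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_filters + 2)
    return mel_to_hz(mel_points)[1:-1]   # drop the two band edges

print(np.round(mel_center_frequencies(10), 1))
# In the paper, these centers would instead come from the micro/macro mechanical
# cochlear model response, with the rest of the MFCC-like pipeline unchanged.
```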
Citations: 1
Applicability of Cluster Validation Indexes for Large Data Sets
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.30
M. Santibáñez, R. M. Valdovinos, Adrián Trueba, Eréndira Rendón Lara, R. Alejo, E. López
Over time, it has been found that there is valuable information within the data sets generated in different areas. These large data sets need to be processed with data mining techniques to extract the hidden knowledge inside them. Because many of today's data sets contain a large number of instances and carry no information describing them, it is necessary to use data mining methods such as clustering, which allows the data to be grouped according to its characteristics. Although some algorithms give good results on small or medium-size data sets, they can provide poor results when working with large data sets. For this reason, in this paper we propose to use different cluster validation methods to determine clustering quality and, through their analysis, to determine empirically which indexes are more reliable for working with large data sets.
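Since the abstract does not name the specific indexes, the sketch below uses three common internal validation indexes available in scikit-learn (silhouette, Davies-Bouldin and Calinski-Harabasz) as stand-ins, computed on a synthetic data set with a moderately large number of instances.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score

# Synthetic "large" data set with a known cluster structure.
X, _ = make_blobs(n_samples=50_000, centers=5, n_features=8, random_state=0)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Internal validation indexes: higher silhouette / Calinski-Harabasz and lower
# Davies-Bouldin indicate more compact, better-separated clusters.
print("silhouette        :", silhouette_score(X, labels, sample_size=5_000, random_state=0))
print("Davies-Bouldin    :", davies_bouldin_score(X, labels))
print("Calinski-Harabasz :", calinski_harabasz_score(X, labels))
```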
Citations: 5
Kinesthetic Guided with Graphotherapeutic Purposes
Pub Date : 2013-11-24 DOI: 10.1109/MICAI.2013.45
Alejandro Jarillo-Silva, O. Dominguez-Ramirez, J. A. Cruz-Tolentino, L. E. R. Velasco, Vicente Parra‐Vega
This paper presents the design, construction and implementation of a calligraphic platform with biomedical applications. This technological tool could be employed in physiotherapy to recover the loss of calligraphic abilities caused by common psychomotor disorders such as dyslexia and stroke. The experimental platform allows the motion performance (physical interaction variables) to be defined, in particular for the upper limbs. The patient is guided through the end effector of a haptic device; to this end, a nonlinear control law is used in closed loop with the human operator, with language symbols serving as the trajectory to track. Since the user may be a passive human, the control law is designed on the basis of passivity theory and sliding mode to achieve stability and safety in the human-machine interaction. The haptic system described is designed to improve physiotherapeutic tasks by supplying the motion measurements (position/velocity) and their errors. Preliminary tests using this novel system demonstrated a significant influence on the recovery of function in patients with psychomotor disorders.
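As a rough sketch of the kind of guidance law involved, the code below implements a generic first-order sliding-mode tracking term on the position/velocity error; it is not the authors' passivity-based controller, and all gains and trajectories are hypothetical.

```python
import numpy as np

def sliding_mode_force(q, qd, q_ref, qd_ref, lam=5.0, K=2.0, phi=0.05):
    """Generic first-order sliding-mode tracking law (not the authors' exact controller).

    q, qd         : measured position and velocity of the haptic end effector
    q_ref, qd_ref : reference trajectory (e.g. the stroke of a language symbol)
    s = de + lam*e defines the sliding surface; tanh(s/phi) smoothly replaces
    sign(s) to reduce chattering.
    """
    e = q_ref - q                # position tracking error
    de = qd_ref - qd             # velocity tracking error
    s = de + lam * e             # sliding surface
    return K * np.tanh(s / phi)  # guidance force applied to the user's hand

# One control step along a hypothetical 2-D letter trace.
q, qd = np.array([0.02, 0.01]), np.array([0.0, 0.0])
q_ref, qd_ref = np.array([0.03, 0.015]), np.array([0.05, 0.02])
print(sliding_mode_force(q, qd, q_ref, qd_ref))
```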
Citations: 1