Juan Pablo Munoz Toriz, I. M. Ruiz, José Ramón Enrique Arrazola-Ramírez
In this paper, we describe the development of a series of automatic theorem provers for a variety of logics. The provers are developed using a functional approach. The first prover, for Classical Propositional Calculus (CPC), is based on a constructive proof of Kalmar's Theorem. We also provide an implementation of a cut- and contraction-free sequent calculus for Intuitionistic Propositional Logic (IPC). Next, we introduce a prover for ALCS4, the description logic ALC with transitive and reflexive roles only; this prover is also based on a cut- and contraction-free sequent calculus. Finally, we provide a complexity analysis for each prover.
{"title":"On Automatic Theorem Proving with ML","authors":"Juan Pablo Munoz Toriz, I. M. Ruiz, José Ramón Enrique Arrazola-Ramírez","doi":"10.1109/MICAI.2014.42","DOIUrl":"https://doi.org/10.1109/MICAI.2014.42","url":null,"abstract":"In this paper, we describe the development of a series of automatic theorem provers for a variety of logics. Provers are developed from a functional approach. The first prover is for Classical Propositional Calculus (CPC), which is based on a constructive proof of Kalmar's Theorem. We also provide the implementation of a cut and contraction free sequent calculus for Intuitionistic Propositional Logic (IPC). Next, it is introduced a prover for ALCS4, which is the description logic ALC with transitive and reflexive roles only. This prover is also based on a cut and contraction free sequent calculus. We also provide a complexity analysis for each prover.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128441709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Trejo, G. Sidorov, Marco Moreno, Sabino Miranda-Jiménez, Rodrigo Cadena Martínez
In classification tasks, one of the main problems is choosing which features provide the best results, i.e., how to construct the vector space model. In this paper, we show how to complement the traditional vector space model with the concept of soft similarity. We combine the traditional tf-idf model with latent Dirichlet allocation and apply it to multi-label classification. We consider the multi-label files of the Reuters-21578 corpus as a case study. The methodology is evaluated using the multi-label algorithm RAkEL, with the traditional tf-idf model as the baseline. We present F1 measures for both models over various feature sets, preprocessing techniques, and vector sizes. The new model obtains better results than the baseline model.
{"title":"Using Soft Similarity in Multi-label Classification for Reuters-21578 Corpus","authors":"J. Trejo, G. Sidorov, Marco Moreno, Sabino Miranda-Jiménez, Rodrigo Cadena Martínez","doi":"10.1109/MICAI.2014.7","DOIUrl":"https://doi.org/10.1109/MICAI.2014.7","url":null,"abstract":"In classification tasks one of the main problems is to choose which features provide best results, i.e., Construct a vector space model. In this paper, we show how to complement traditional vector space model with the concept of soft similarity. We use the combination of the traditional tf-idf model with latent Dirichlet allocation applied in multi-label classification. We considered multi-label files of the Reuters-21578 corpus as study case. The methodology is evaluated using the multi-label algorithm Rakell. We used the traditional tf-idf model as the baseline. We present the F1 measures for both models for various feature sets, preprocessing techniques and vector sizes. The new model obtains better results than the base line model.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129333343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daniel Alba-Cuellar, A. Zavala, A. H. Aguirre, E. E. P. D. L. Sentí, E. Díaz-Díaz
In this paper, we propose a new methodology for forecasting values of univariate time series, based on an ensemble of Feed-Forward Neural Networks (FFNNs). Each ensemble element is trained with the Particle Swarm Optimization (PSO) algorithm, and the ensemble produces a final sequence of time series forecasts via a bootstrapping procedure. The proposed methodology is compared against Auto-Regressive Integrated Moving Average (ARIMA) models. This experiment gives a good indication of how effective soft computing techniques can be in time series modeling. The results show empirically that the proposed methodology is robust and produces useful forecast error bounds that give a clear picture of a time series' future movements.
{"title":"Time Series Forecasting with PSO-Optimized Neural Networks","authors":"Daniel Alba-Cuellar, A. Zavala, A. H. Aguirre, E. E. P. D. L. Sentí, E. Díaz-Díaz","doi":"10.1109/MICAI.2014.22","DOIUrl":"https://doi.org/10.1109/MICAI.2014.22","url":null,"abstract":"In this paper, we propose a new methodology to forecast values for univariate time series datasets, based on a Feed Forward Neural Network (FFNN) ensemble. Each ensemble element is trained with the Particle Swarm Optimization (PSO) algorithm, this ensemble produces a final sequence of time series forecasts via a bootstrapping procedure. Our proposed methodology is compared against Auto-Regressive Integrated Moving Average (ARIMA) models. This experiment gives us a good idea of how effective soft computing techniques can be in the field of time series modeling. The results obtained show empirically that our proposed methodology is robust and produces useful forecast error bounds that provide a clear picture of a time series' future movements.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129841429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Islam Elhalwany, Ammar Mohammed, K. Wassif, H. Hefny
One of the successful approaches to developing TCBR applications is SOPHisticated Information Analysis (SOPHIA), which is distinguished by its ability to work without prior knowledge engineering, domain dependency, or language dependency. One of the critical challenges facing the application of TCBR is responding to an enormous number of user requests with acceptable performance. Another challenge is the complexity of handling the Arabic language. The main contribution of this paper is an enhanced version of SOPHIA-TCBR that provides higher accuracy and better time performance. The proposed approach is evaluated in the domain of Arabic Islamic Jurisprudence (fiqh), a challenging case study given its large case base and the enormous number of users' requests (questions) each day. This task requires a smart system that can help fulfill people's need for answers by applying the proposed approach in this domain and overcoming challenges related to the language's syntax and semantics.
{"title":"Enhanced Knowledge Discovery Approach in Textual Case Based Reasoning","authors":"Islam Elhalwany, Ammar Mohammed, K. Wassif, H. Hefny","doi":"10.1109/MICAI.2014.11","DOIUrl":"https://doi.org/10.1109/MICAI.2014.11","url":null,"abstract":"One of the successful approaches for developing TCBR applications is SOPHisticated Information Analysis (SOPHIA), which is distinguished by its ability to work without prior knowledge engineering, domain dependency, or language dependency. One of the critical challenges faced the application of TCBR is responding to enormous requests from users in acceptable performance. Another challenge is the complexity of adapting Arabic language. The main contribution of this paper is proposing an enhanced version of SOPHIA-TCBR, which provides higher accuracy and better time performance. The proposed approach is evaluated in the domain of Arabic Islamic Jurisprudence (fiqh), which is a challenge case study with its large case-base and enormous number of users' requests (questions) daily. This task actually requires a certain smart system that can help in fulfilling people's needs for answers by applying the proposed approach in this domain and overcoming challenges related to the language syntax and semantics.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115241292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Martínez-Luna, Jesús-Manuel Olivares-Ceja, Eric Ortega Villanueva, A. Guzmán-Arenas
The Mexican educational system collects thousands of records each year related to student performance, which are used to support academic decisions. In this paper, data analysis, data structures, and different visual alternatives are used to discover student trajectories and mobility patterns. A model and a software tool have been developed and complemented with available visualization tools to enable visual pattern detection. The development has been tested with data samples from several Mexican states, and the results support using the proposal as an alternative for discovering data patterns through a visual approach. The implementation facilitates timely detection of student progress and bottlenecks, so the teacher can provide students with supplementary materials and guides focused on knowledge acquisition, skills, mastery of concepts, techniques and tools, and the production and development of innovative ideas.
{"title":"Mining Academic Data Using Visual Patterns","authors":"G. Martínez-Luna, Jesús-Manuel Olivares-Ceja, Eric Ortega Villanueva, A. Guzmán-Arenas","doi":"10.1109/MICAI.2014.20","DOIUrl":"https://doi.org/10.1109/MICAI.2014.20","url":null,"abstract":"The Mexican Educative System collects thousands of records each year, related with student performance to support academic decisions. In this paper the data analysis, structures and different visual alternatives are used to discover student trajectories and mobility patterns. A model and a software tool have been developed and complemented with available visualization tools to enable visual pattern detection. The development has been tested with samples of data from several Mexican states and the results encourage the proposal to be used as an alternative to discover data patterns following a visual approach. The implementation of the proposal facilitates timely detection of student progress and bottlenecks for the teacher to provide students with supplementary materials and guides focused towards knowledge acquisition, skills and master concepts, techniques, tools management or production and development of innovative ideas.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121098798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noel E. Rodríguez-Maya, J. Martínez-Carranza, J. Flores, Mario Graff
The Scholar Timetabling Problem consists of fixing a sequence of meetings among lecturers, classrooms, and schedules for a set of groups and courses in a given period of time, satisfying a set of different constraints, where each course, lecturer, classroom, and time slot has special features; the problem is known to be NP-hard. Given the impossibility of solving this problem optimally, traditional and metaheuristic methods have been proposed to provide near-optimal solutions. This paper shows the implementation of a Genetic Algorithm (GA) using a real encoding to solve the Scholar Timetabling Problem. A naive chromosome representation in a population-based heuristic search leads to a high probability of violating the problem constraints. To convert solutions that violate constraints (unfeasible solutions) into ones that do not (feasible solutions), we propose a repair mechanism. Based on the proposed mechanism, we present a possible solution to the Scholar Timetabling Problem applied to a real school (Instituto Tecnologico de Zitacuaro). We present experimental results for different GA configurations and identify the best configuration for the case study.
{"title":"Solving a Scholar Timetabling Problem Using a Genetic Algorithm - Study Case: Instituto Tecnologico De Zitacuaro","authors":"Noel E. Rodríguez-Maya, J. Martínez-Carranza, J. Flores, Mario Graff","doi":"10.1109/MICAI.2014.36","DOIUrl":"https://doi.org/10.1109/MICAI.2014.36","url":null,"abstract":"The Scholar Timetabling Problem consists of fixing a sequence of meetings between lecturers, classrooms and schedule to a set of groups and courses in a given period of time, satisfying a set of different constraints, where each course, lecturer, classroom, and time have special features, this problem is known to be NP-hard. Given the impossibility to solve this problem optimally, traditional and metaheuristic methods have been proposed to provide near-optimal solutions. This paper shows the implementation of a Genetic Algorithm (GA) using a real coding to solve the Scholar Timetabling Problem. A naive representation for chromosomes in a population-based heuristic search leads to high probabilities of violation of the problem constraints. To convert solutions that violate constraints (unfeasible solutions) into ones that do not (feasible solutions), we propose a repair mechanism. Based on the proposed mechanism, we present a possible solution to the Scholar Timetabling Problem applied to a real school (Instituto Tecnologico de Zitacuaro). Here we present experimental results based on different types of GA configurations to solve this problem and present the best GA configuration to solve the study case.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124946663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rodolfo Alvarado-Cervantes, E. Riverón, Vladislav Khartchenko
We present a characterization and a numerical evaluation of our own semi-automatic color image segmentation method, using generated synthetic images with their associated ground truth. The evaluation methodology was designed to assess the efficiency of the color information obtained from color segmentation algorithms. Through ROC curves and their analysis, we obtained particular characteristics of our segmentation method, such as its sensitivity to the threshold selection and to the number of pixels required in the color sample needed by the algorithm.
{"title":"Characterization and Numerical Evaluation of a Color Image Segmentation Method","authors":"Rodolfo Alvarado-Cervantes, E. Riverón, Vladislav Khartchenko","doi":"10.1109/MICAI.2014.14","DOIUrl":"https://doi.org/10.1109/MICAI.2014.14","url":null,"abstract":"We present a characterization and a numerical evaluation of an own semi-automatic color image segmentation method using generated synthetic images with its associated ground truth. The evaluation methodology was designed to assess the efficiency of the resulting color information achieved from color segmentation algorithms. By the use of ROC curves and its analysis, we obtained some particular characteristics of our segmentation method, such as the level of sensibility related to the threshold selection and to the appropriate number of pixels to have by the color sample needed by the algorithm.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128130703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rosa Liliana Gonzalez Arredondo, Romeo Sanchez, A. Berrones
One of the most popular algorithms in the field of domain-independent planning is POP, partial order planning. POP follows a least-commitment strategy to solve planning problems: commitments are delayed during the planning phase until they are absolutely necessary. As a consequence, the algorithm provides greater flexibility for solving planning problems, but at a higher cost in performance. POP-based techniques do not consider search states; instead, search nodes represent partial plans. Recent advances in planning on distance-based heuristics and reachability analysis have helped POP planners solve more planning problems than before. Although such heuristic techniques have been shown to boost the performance of POP algorithms, they still lag behind state-space planners. We believe this is mainly due to the partial-order representation of the search nodes in POP. In this article, instead of proposing additional heuristics for POP, we enable POP to explore different areas of its search space. We think the basic POP algorithm follows a greedy path through its search space and suffers from local optima from which it cannot recover. To this end, we have augmented POP with a simulated annealing procedure that accepts worse solutions with a certain probability. The augmented algorithm produces promising results in our empirical evaluation, returning up to 19% more solutions on the problems considered.
{"title":"Introducing Simulated Annealing in Partial Order Planning","authors":"Rosa Liliana Gonzalez Arredondo, Romeo Sanchez, A. Berrones","doi":"10.1109/MICAI.2014.35","DOIUrl":"https://doi.org/10.1109/MICAI.2014.35","url":null,"abstract":"One of the most popular algorithms in the field of domain independent planning is POP - partial order planning. POP considers a least commitment strategy to solve planning problems. Such strategy delays commitments during the planning phase until it is absolutely necessary. In consequence, the algorithm provides greater flexibility for solving planning problems, but with a higher cost in performance. POP-based techniques do not consider search states, instead, search nodes represent partial plans. Recent advances in planning on distance based heuristics and reach ability analysis have helped POP planners to solve more planning problems than before. Although such heuristic techniques have demonstrated to boost performance for POP algorithms, they still remain behind state space planners. We believe that this is mainly due to the partial order representation of the search nodes in POP. In this article, instead of proposing additional heuristics for POP, we enable POP to consider different areas of its search space. We think that the basic POP algorithm follows a greedy path in its search space suffering from local optima problems, from where it cannot recover. To this extent, we have augmented POP with a simulated annealing procedure, which considers worst solutions with certain probability. The augmented algorithm produces promising results in our empirical evaluation, returning up to 19% more solutions in the problems being considered.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134164407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orlando Aguilar, Teresa E. M. Alarcón, Oscar Dalmau Cedeño, A. Zamudio
An automatic algorithm for measuring the thickness of carbon nanotubes is presented. The proposed algorithm is based on computing the thinned body (skeleton) of the nanotubes. The main challenge in measuring the thickness of a nanotube is isolating it, due to the overlapping between nanotubes that typically appears in this type of image. In particular, an algorithm for solving the nanotube overlapping problem in previously segmented images has also been developed. The performance of the algorithm is evaluated on a collection of segmented images obtained from real carbon nanotubes using different types of electron microscopes. The results of the algorithm are compared with ground-truth measurements provided by a nanotechnologist.
{"title":"Characterization of Nanotube Structures Using Digital-Segmented Images","authors":"Orlando Aguilar, Teresa E. M. Alarcón, Oscar Dalmau Cedeño, A. Zamudio","doi":"10.1109/MICAI.2014.15","DOIUrl":"https://doi.org/10.1109/MICAI.2014.15","url":null,"abstract":"An automatic algorithm for measuring the thickness of carbon nanotubes is presented. The proposed algorithm is based on the computation of the thinning body of nanotubes. The main challenge for measuring the thickness of a nanotube is its isolation, due to the overlapping between nanotubes that typically appears in this type of images. In particular, an algorithm for solving the nanotube overlapping problem in previously-segmented images has also been elaborated. The performance of the algorithm is evaluated through a collection of segmented-images which are obtained from real carbon nanotubes using different types of electronic microscopes. The results of the algorithm are compared with measurements, a ground truth, provided by a nanotechnologist.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126244980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asking friends about a particular subject is a common situation in a person's daily life. In virtual environments, this is a little more difficult: virtual environments do not always allow the face-to-face contact that helps people meet and share their experiences to answer questions about a particular subject. This article presents a text mining method for obtaining people's knowledge from forums and a recommender system that recommends people to ask about a particular subject.
{"title":"Peer Recommendation Based on Text Mining Algorithm","authors":"S. Aciar","doi":"10.1109/MICAI.2014.12","DOIUrl":"https://doi.org/10.1109/MICAI.2014.12","url":null,"abstract":"Ask friends about a particular subject are a common situation in the daily life of a person. In virtual environments is a little more difficult. Virtual environments do not always allow face contact that helps people meet and share their experiences to answer questions about a particular subject. In this article are presented: a text mining method for obtaining the knowledge of people from forums and a recommender system that recommends people to ask them about a particular subject.","PeriodicalId":189896,"journal":{"name":"2014 13th Mexican International Conference on Artificial Intelligence","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130041989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}