A Binary Stock Event Model for stock trends forecasting: Forecasting stock trends via a simple and accurate approach with machine learning
Pub Date: 2011-12-01 | DOI: 10.1109/ISDA.2011.6121740
H. J. Jung, J. Aggarwal
The volatile and stochastic characteristics of securities make it challenging to predict even tomorrow's stock prices. Better estimation of stock trends can be achieved with a significant, well-constructed set of features, and the prediction capability improves further when the model captures the unobservable attributes underlying the varying tendencies. In this paper, we propose a Binary Stock Event Model (BSEM) and generate feature sets based on it in order to better predict future trends of the stock market. We apply two learning models, a Naive Bayes classifier and a Support Vector Machine, to demonstrate the efficiency of our approach in terms of prediction accuracy and computational cost. Our experiments show prediction accuracies of around 70–80% for one-day predictions. In addition, our back-testing shows that our trading model outperforms well-known technical-indicator-based trading strategies by 30%–100% in cumulative returns. These results suggest that BSEM-based stock forecasting excels in both prediction accuracy and cumulative returns on a real-world dataset.
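As a rough illustration of the pipeline described above, the sketch below trains the two classifier families named in the abstract on hand-made binary features over a synthetic price series. The event definitions are stand-ins, since the abstract does not specify the actual BSEM events.

```python
# Illustrative sketch only: the paper's exact BSEM event definitions are not
# given in the abstract, so these binary "stock events" are hypothetical.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))  # synthetic prices

def binary_events(close, t, window=5):
    """Encode day t as a vector of binary events (illustrative examples)."""
    ma = close[t - window:t].mean()
    return [
        int(close[t] > close[t - 1]),      # up day
        int(close[t] > ma),                # above short moving average
        int(close[t - 1] > close[t - 2]),  # previous day was an up day
    ]

X = np.array([binary_events(close, t) for t in range(6, len(close) - 1)])
y = np.array([int(close[t + 1] > close[t]) for t in range(6, len(close) - 1)])

split = int(0.8 * len(X))
for model in (BernoulliNB(), SVC(kernel="rbf")):
    model.fit(X[:split], y[:split])
    print(type(model).__name__, "accuracy:", model.score(X[split:], y[split:]))
```

On a random walk like this, accuracies hover around 50%; the point is only the shape of the pipeline: binary event features in, next-day trend label out.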
{"title":"A Binary Stock Event Model for stock trends forecasting: Forecasting stock trends via a simple and accurate approach with machine learning","authors":"H. J. Jung, J. Aggarwal","doi":"10.1109/ISDA.2011.6121740","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121740","url":null,"abstract":"The volatile and stochastic characteristics of securities make it challenging to predict even tomorrow's stock prices. Better estimation of stock trends can be accomplished using both the significant and well-constructed set of features. Moreover, the prediction capability will gain momentum as we build the right model to capture unobservable attributes of the varying tendencies. In this paper, we propose a Binary Stock Event Model (BSEM) and generate features sets based on it in order to better predict the future trends of the stock market. We apply two learning models such as a Bayesian Naive Classifier and a Support Vector Machine to prove the efficiency of our approach in the aspects of prediction accuracy and computational cost. Our experiments demonstrate that the prediction accuracies are around 70–80% in one day predictions. In addition, our back-testing proves that our trading model outperforms well-known technical indicator based trading strategies with regards to cumulative returns by 30%–100%. As a result, this paper suggests that our BSEM based stock forecasting shows its excellence with regards to prediction accuracy and cumulative returns in a real world dataset.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133150755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial expression recognition using entropy and brightness features
Pub Date: 2011-11-22 | DOI: 10.1109/ISDA.2011.6121744
Rizwan Ahmed Khan, Alexandre Meyer, H. Konik, S. Bouakaz
This paper proposes a novel framework for universal facial expression recognition. The framework is based on two sets of features extracted from the face image: entropy and brightness. First, saliency maps are obtained with a state-of-the-art saliency detection algorithm, "frequency-tuned salient region detection". Then only the localized salient facial regions from the saliency maps are processed to extract entropy and brightness features. To validate the performance of the saliency detection algorithm against the human visual system, we performed a visual experiment: eye movements of 15 subjects were recorded with an eye-tracker under free-viewing conditions as they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of this experiment provide evidence that the obtained saliency maps conform well with human fixation data. Finally, the proposed framework's performance is demonstrated through satisfactory classification results on the Cohn-Kanade database.
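The sketch below illustrates the flavour of this feature extraction under simplifying assumptions: frequency-tuned saliency is approximated on a grayscale image (the original method of Achanta et al. works in Lab colour space), and entropy and brightness are computed only over the most salient pixels. All thresholds are illustrative.

```python
# A minimal sketch, assuming a grayscale simplification of frequency-tuned
# saliency (the published method operates on Lab colour channels).
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_tuned_saliency(gray):
    """Saliency = distance between the global image mean and a blurred copy."""
    blurred = gaussian_filter(gray.astype(float), sigma=3)
    return np.abs(gray.mean() - blurred)

def region_features(gray, saliency, keep=0.2):
    """Entropy and brightness computed only over the most salient pixels."""
    thresh = np.quantile(saliency, 1.0 - keep)
    region = gray[saliency >= thresh]
    hist, _ = np.histogram(region, bins=32, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))   # Shannon entropy of the salient region
    brightness = region.mean()          # mean intensity of the salient region
    return entropy, brightness

face = (np.random.default_rng(1).random((128, 128)) * 255).astype(np.uint8)
sal = frequency_tuned_saliency(face)
print(region_features(face, sal))
```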
{"title":"Facial expression recognition using entropy and brightness features","authors":"Rizwan Ahmed Khan, Alexandre Meyer, H. Konik, S. Bouakaz","doi":"10.1109/ISDA.2011.6121744","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121744","url":null,"abstract":"This paper proposes a novel framework for universal facial expression recognition. The framework is based on two sets of features extracted from the face image: entropy and brightness. First, saliency maps are obtained by state-of-the-art saliency detection algorithm i.e. “frequency-tuned salient region detection”. Then only localized salient facial regions from saliency maps are processed to extract entropy and brightness features. To validate the performance of saliency detection algorithm against human visual system, we have performed a visual experiment. Eye movements of 15 subjects were recorded with an eye-tracker in free viewing conditions as they watch a collection of 54 videos selected from Cohn-Kanade facial expression database. Results of the visual experiment provided the evidence that obtained saliency maps conforms well with human fixations data. Finally, evidence of the proposed framework's performance is exhibited through satisfactory classification results on Cohn-Kanade database.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"2 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120808562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
sREVEAL: Scalable extensions of REVEAL towards regulatory network inference
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121850
Vijender Chaitankar, P. Ghosh, M. Elasri, E. Perkins
Most popular approaches to gene regulatory network inference, e.g. Dynamic Bayesian Networks and Probabilistic Boolean Networks, are computationally complex and can only be used to infer small networks. While high-throughput experimental methods for monitoring gene expression provide data for thousands of genes, these approaches cannot fully utilize the entire spectrum of generated data. With the advent of information-theoretic approaches in the last decade, the inference of larger regulatory networks from high-throughput microarray data has become possible. Not all information-theoretic approaches are scalable, though; only methods that infer networks from pairwise interactions between genes, such as relevance networks, ARACNE and CLR, scale up to genome-level inference. ARACNE and CLR improve inference accuracy by pruning false edges, but they do not introduce new true edges. REVEAL is another information-theoretic approach, one that considers mutual information between multiple genes. As it goes beyond pairwise interactions, it does not scale and can only infer small networks. In this paper, we propose two algorithms that improve the scalability of REVEAL by utilizing a transcription factor list (which can be predicted from the gene sequences) as prior knowledge and by introducing time lags to further reduce the potential transcription factors that may regulate a gene. Our proposed sREVEAL algorithms can infer larger networks with higher accuracy than the popular CLR algorithm.
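A minimal sketch of the two ingredients highlighted above, assuming synthetic binarized expression data: candidate regulators are restricted to a transcription-factor list, and each candidate is scored by time-lagged mutual information with the target gene. This is not the full sREVEAL procedure, only its core scoring step.

```python
# Sketch of lagged mutual-information scoring with a TF prior; all data and
# gene indices are synthetic stand-ins.
import numpy as np

def mutual_info(x, y):
    """Mutual information (bits) between two binary time series."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(2)
expr = (rng.random((6, 50)) > 0.5).astype(int)  # 6 genes x 50 time points
expr[3, 1:] = expr[0, :-1]                      # gene 3 copies TF 0 with lag 1
tf_list = [0, 1]                                # prior knowledge: genes 0, 1 are TFs

target, lag = 3, 1
scores = {tf: mutual_info(expr[tf, :-lag], expr[target, lag:]) for tf in tf_list}
print(scores)  # TF 0 should score far higher than TF 1
```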
{"title":"sREVEAL: Scalable extensions of REVEAL towards regulatory network inference","authors":"Vijender Chaitankar, P. Ghosh, M. Elasri, E. Perkins","doi":"10.1109/ISDA.2011.6121850","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121850","url":null,"abstract":"Most of the popular approaches towards gene regulatory networks inference e.g., Dynamic Bayesian Networks, Probabilistic Boolean Networks etc. are computationally complex and can only be used to infer small networks. While high-throughput experimental methods to monitor gene expression provide data for thousands of genes, these methods cannot fully utilize the entire spectrum of generated data. With the advent of information theoretic approaches in the last decade, the inference of larger regulatory networks from high throughput microarray data has become possible. Not all information theoretic approaches are scalable though; only methods that infer networks considering pair-wise interactions between genes such as, relevance networks, ARACNE and CLR to name a few, can be scaled upto genome-level inference. ARACNE and CLR attempt to improve the inference accuracy by pruning false edges, and do not bring in newer true edges. REVEAL is another information theoretic approach, which considers mutual information between multiple genes. As it goes beyond pair wise interactions, this approach was not scalable and could only infer small networks. In this paper, we propose two algorithms to improve the scalability of REVEAL by utilizing a transcription factor list (that can be predicted from the gene sequences) as prior knowledge and implementing time lags to further reduce the potential transcription factors that may regulate a gene. Our proposed S-REVEAL algorithms can infer larger networks with higher accuracy than the popular CLR algorithm.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116907403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning heterogeneous cooperative linguistic fuzzy rules using local search: Enhancing the COR search space
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121701
Javier Cózar, L. D. L. Ossa, J. A. Gamez
The COR methodology allows the learning of Linguistic Fuzzy Rule-Based Systems by considering cooperation among rules. To do so, COR first finds the set of candidate fuzzy rules that can be fired by the examples in the training set, and then uses a search algorithm to find the final set of rules. In the algorithms proposed so far, all candidate rules have the same number of antecedents, namely the number of input variables. Such rules can be too specific, however, and more generic rules are never considered. In this paper we study the effect of considering all possible rules, regardless of their number of antecedents. Experiments show that the resulting rule bases use simpler rules, and the prediction error improves upon that obtained with classical COR methods.
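A toy sketch of the enlarged search space described above: candidate rules are generated from examples over every non-empty subset of the input variables, not just the full set. The fuzzy partitions and data are illustrative, not the paper's experimental setup.

```python
# Sketch of candidate-rule generation over variable subsets (hypothetical
# partitions; two inputs, one output, three linguistic labels each).
from itertools import combinations

def tri(x, a, b, c):
    """Triangular membership function."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

labels = {"low": (0.0, 0.0, 0.5), "mid": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}

def best_label(x):
    return max(labels, key=lambda l: tri(x, *labels[l]))

examples = [((0.1, 0.9), 0.3), ((0.8, 0.2), 0.7)]  # ((x1, x2), y)

candidates = set()
for (x1, x2), y in examples:
    ante = {1: best_label(x1), 2: best_label(x2)}
    # Classic COR would keep only the full-antecedent rule; the extended
    # space also admits every proper subset, i.e. more generic rules.
    for k in (1, 2):
        for subset in combinations(ante, k):
            rule = (tuple((v, ante[v]) for v in subset), best_label(y))
            candidates.add(rule)

for r in sorted(candidates):
    print(r)
```

The search algorithm (local search, in the paper) then picks one consequent per antecedent combination from this larger pool.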
{"title":"Learning heterogeneus cooperative linguistic fuzzy rules using local search: Enhancing the COR search space","authors":"Javier Cózar, L. D. L. Ossa, J. A. Gamez","doi":"10.1109/ISDA.2011.6121701","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121701","url":null,"abstract":"The COR methodology allows the learning of Linguistic Fuzzy Rule-Based Systems by considering cooperation among rules. In order to do that, COR firstly finds the set of candidate fuzzy rules that can be fired by the examples in the training set, and then uses a search algorithm to find the final set of rules. In the algorithms proposed so far, all candidate rules have the same number of antecedents, which is the number of input variables. However, these rules could be too specific, and rules more generic are not considered. In this paper we study the effect of considering all posible rules, regardeless of their number of antecedents. Experiments show that the rule bases obtained use simpler rules, and the results for the error of prediction improve upon those obtained by using classical COR methods.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127184444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consensus operators for decision making in Fuzzy Random Forest ensemble
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121852
J. M. Cadenas, M. C. Garrido, A. Martínez, Raquel Martínez
When individual classifiers are combined appropriately, we usually obtain better performance in terms of classification precision. Classifier ensembles are the result of combining several individual classifiers. In this work we propose and compare various consensus-based combination methods for obtaining the final decision of an ensemble based on fuzzy decision trees, with the aim of improving results. We present a comparative study on several datasets to show the efficiency of the various combination methods.
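The sketch below shows the general shape of such combination: each fuzzy tree returns a per-class support vector, and different consensus operators fuse them into one decision. The three operators shown are generic examples, not necessarily those proposed in the paper.

```python
# Sketch of consensus-style fusion of per-tree class supports (illustrative).
import numpy as np

tree_supports = np.array([  # rows: trees, cols: class supports in [0, 1]
    [0.7, 0.3],
    [0.4, 0.6],
    [0.8, 0.2],
])

def majority_vote(s):
    """Each tree votes for its top class; the most-voted class wins."""
    votes = np.bincount(s.argmax(axis=1), minlength=s.shape[1])
    return votes.argmax()

def average_support(s):
    """Average the fuzzy supports across trees, then pick the top class."""
    return s.mean(axis=0).argmax()

def weighted_support(s, w):
    """Weight each tree's support, e.g. by its out-of-bag accuracy."""
    return (w[:, None] * s).sum(axis=0).argmax()

weights = np.array([0.5, 0.2, 0.3])
print(majority_vote(tree_supports), average_support(tree_supports),
      weighted_support(tree_supports, weights))
```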
{"title":"Consensus operators for decision making in Fuzzy Random Forest ensemble","authors":"J. M. Cadenas, M. C. Garrido, A. Martínez, Raquel Martínez","doi":"10.1109/ISDA.2011.6121852","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121852","url":null,"abstract":"When individual classifiers are combined appropriately, we usually obtain a better performance in terms of classification precision. Classifier ensembles are the result of combining several individual classifiers. In this work we propose and compare various consensus based combination methods to obtain the final decision of the ensemble based on fuzzy decision trees in order to improve results. We make a comparative study with several datasets to show the efficiency of the various combination methods.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122566528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guiding a relational learning agent with a learning classifier system
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121623
Jose Estevez, Pedro A. Toledo, S. Alayón
This paper investigates a collaborative strategy between an XCS learning classifier system (LCS) and a relational learning (RL) agent. The problem is to learn a relational policy for a stochastic Markov decision process. In the proposed method, the XCS agent is used to improve the performance of the RL agent by filtering the samples used at the induction step. This research shows that, under these conditions, one of the main benefits of using the XCS algorithm comes from selecting the examples for relational learning using an estimate of the accuracy of the predicted value at each state-action pair. This kind of transfer learning is important because the characteristics of both agents are complementary: the RL agent incrementally induces a high-level description of a policy, while the LCS agent offers adaptation to changes in the environment.
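A schematic sketch of that filtering step, assuming the XCS side can attach an accuracy estimate to each state-action sample: only sufficiently reliable samples are passed on to relational induction. All names and the threshold are illustrative.

```python
# Illustrative sketch of accuracy-based sample filtering before induction.
from dataclasses import dataclass

@dataclass
class Sample:
    state: str
    action: str
    predicted_value: float
    accuracy: float  # XCS's estimate of its own prediction accuracy

def filter_for_induction(samples, min_accuracy=0.8):
    """Keep only the samples the classifier system considers reliable."""
    return [s for s in samples if s.accuracy >= min_accuracy]

samples = [
    Sample("on(a,b)", "move(a)", 0.9, 0.95),
    Sample("on(b,a)", "move(b)", 0.4, 0.30),  # unreliable: filtered out
]
print(filter_for_induction(samples))
```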
{"title":"Guiding a relational learning agent with a learning classifier system","authors":"Jose Estevez, Pedro A. Toledo, S. Alayón","doi":"10.1109/ISDA.2011.6121623","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121623","url":null,"abstract":"This paper researches a collaborative strategy between an XCS learning classifier system (LCS) and a relational learning (RL) agent. The problem here is to learn a relational policy for a stochastic markovian decision process. In the proposed method the XCS agent is used to improve the performance of the RL agent by filtering the samples used at the induction step. This research shows that in these conditions, one of the main benefits of using the XCS algorithm comes from selecting the examples for relational learning using an estimation for the accuracy of the predicted value at each state-action pair. This kind of transfer learning is important because the characteristics of both agents are complementary: the RL agent incrementally induces a high level description of a policy, while the LCS agent offers adaptation to changes in the environment.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121864932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using memory to reduce the information overload in a university digital library
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121696
Álvaro Tejeda-Lorente, C. Porcel, María Ángeles Martínez, A. G. López-Herrera, E. Herrera-Viedma
Nowadays the amount of incoming information overwhelms us, and as a consequence we have serious problems accessing relevant information; that is, we suffer from information overload. Recommender systems have been applied successfully to alleviate information overload in different domains, but the number of electronic resources generated daily keeps growing, so the problem persists. In this paper we propose an improved recommender system to address the persistent information overload found in a University Digital Library. The idea is to include a memory that remembers resources that were selected but not recommended to the user, so that the system can incorporate them into future recommendations to complete the set of filtered resources, for example when there are only a few resources to recommend, or when the user wants output that combines resources selected in different recommendation rounds.
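A minimal sketch of that memory mechanism: relevant-but-unshown resources from one round are remembered and used to top up short recommendation lists in later rounds. Class and resource names are hypothetical.

```python
# Illustrative sketch of a "memory" that completes sparse recommendation rounds.
from collections import deque

class MemoryRecommender:
    def __init__(self, shown_per_round=3, memory_size=50):
        self.memory = deque(maxlen=memory_size)
        self.shown_per_round = shown_per_round

    def recommend(self, scored_resources):
        """scored_resources: list of (resource, relevance) pairs."""
        ranked = sorted(scored_resources, key=lambda p: p[1], reverse=True)
        shown = ranked[:self.shown_per_round]
        # Remember the relevant resources that did not make the cut.
        self.memory.extend(ranked[self.shown_per_round:])
        deficit = self.shown_per_round - len(shown)
        for _ in range(min(deficit, len(self.memory))):
            shown.append(self.memory.popleft())  # top up from memory
        return [r for r, _ in shown]

rec = MemoryRecommender()
print(rec.recommend([("paper-A", 0.9), ("paper-B", 0.8),
                     ("paper-C", 0.7), ("paper-D", 0.6)]))
print(rec.recommend([("paper-E", 0.5)]))  # short round topped up with paper-D
```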
{"title":"Using memory to reduce the information overload in a university digital library","authors":"Álvaro Tejeda-Lorente, C. Porcel, María Ángeles Martínez, A. G. López-Herrera, E. Herrera-Viedma","doi":"10.1109/ISDA.2011.6121696","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121696","url":null,"abstract":"In the recent times the amount of information coming overwhelms us, and because of it we have serious problems to access to relevant information, that is, we suffer information overload problems. Recommender systems have been applied successfully to avoid the information overload in different scopes, but the number of electronic resources daily generated keeps growing and the problem still remain. Therefore, we find a persistent problem of information overload. In this paper we propose an improved recommender system to avoid the persistent information overload found in a University Digital Library. The idea is to include a memory to remember selected resources but not recommended to the user, and in such a way, the system could incorporate them in future recommendations to complete the set of filtered resources, for example, if there are a few resources to be recommended or if the user wishes output obtained by combination of resources selected in different recommendation rounds.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122165336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the application of a hybrid Harmony Search algorithm to node localization in anchor-based Wireless Sensor Networks
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121791
D. Manjarrés, J. Ser, S. Gil-Lopez, M. Vecchio, I. Landa-Torres, Roberto López-Valcarce
In many applications based on Wireless Sensor Networks (WSNs) with static sensor nodes, the availability of accurate location information for the network nodes may become essential. The node localization problem is to estimate all the unknown node positions from noisy pairwise distance measurements between nodes within range of each other. Maximum Likelihood (ML) estimation results in a non-convex problem, further complicated by the fact that sufficient conditions for the solution to be unique are not easily identified, especially in sparse networks. As a result, different node configurations can provide equally good fitness values, with only one of them corresponding to the real network geometry. This paper presents a novel soft-computing localization technique based on hybridizing a Harmony Search (HS) algorithm with a local search procedure whose aim is to identify localizability issues and mitigate their effects during the iterative process. Moreover, certain connectivity-based geometrical constraints are exploited to further reduce the areas where each sensor node can be located. Simulation results show that our approach outperforms a previously proposed meta-heuristic localization scheme based on the Simulated Annealing (SA) algorithm, in terms of both localization error and computational cost.
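The sketch below shows the plain Harmony Search core applied to a one-node toy instance: candidate positions are kept in a harmony memory and improved by memory consideration, pitch adjustment, and random sampling, minimizing the mismatch with measured anchor distances. The parameters and network are illustrative; the paper's hybrid adds a local search and connectivity constraints on top of this.

```python
# Minimal HS sketch for range-based localization (toy: one unknown node,
# three anchors; all HS parameters are conventional illustrative values).
import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_node = np.array([0.4, 0.6])
meas = np.linalg.norm(anchors - true_node, axis=1) + rng.normal(0, 0.01, 3)

def cost(pos):
    """Sum of squared residuals between estimated and measured distances."""
    return np.sum((np.linalg.norm(anchors - pos, axis=1) - meas) ** 2)

HMS, HMCR, PAR, BW, iters = 10, 0.9, 0.3, 0.05, 2000
memory = rng.random((HMS, 2))  # harmony memory of candidate positions
costs = np.array([cost(p) for p in memory])

for _ in range(iters):
    new = np.empty(2)
    for d in range(2):
        if rng.random() < HMCR:              # draw coordinate from memory...
            new[d] = memory[rng.integers(HMS), d]
            if rng.random() < PAR:           # ...with occasional pitch adjustment
                new[d] += rng.uniform(-BW, BW)
        else:                                # ...or sample it at random
            new[d] = rng.random()
    c = cost(new)
    worst = costs.argmax()
    if c < costs[worst]:                     # replace the worst harmony
        memory[worst], costs[worst] = new, c

print("estimate:", memory[costs.argmin()], "true:", true_node)
```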
{"title":"On the application of a hybrid Harmony Search algorithm to node localization in anchor-based Wireless Sensor Networks","authors":"D. Manjarrés, J. Ser, S. Gil-Lopez, M. Vecchio, I. Landa-Torres, Roberto López-Valcarce","doi":"10.1109/ISDA.2011.6121791","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121791","url":null,"abstract":"In many applications based on Wireless Sensor Networks (WSNs) with static sensor nodes, the availability of accurate location information of the network nodes may become essential. The node localization problem is to estimate all the unknown node positions, based on noisy pairwise distance measurements of nodes within range of each other. Maximum Likelihood (ML) estimation results in a non-convex problem, which is further complicated by the fact that sufficient conditions for the solution to be unique are not easily identified, especially when dealing with sparse networks. Thereby, different node configurations can provide equally good fitness results, with only one of them corresponding to the real network geometry. This paper presents a novel soft-computing localization technique based on hybridizing a Harmony Search (HS) algorithm with a local search procedure whose aim is to identify the localizability issues and mitigate its effects during the iterative process. Moreover, certain connectivity-based geometrical constraints are exploited to further reduce the areas where each sensor node can be located. Simulation results show that our approach outperforms a previously proposed meta-heuristic localization scheme based on the Simulated Annealing (SA) algorithm, in terms of both localization error and computational cost.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117248708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining answers for causal questions in a medical example
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121694
Alejandro Sobrino, J. A. Olivas, C. Puente
The aim of this paper is to approach causal questions in a medical domain. The causal questions par excellence are what-, how- and why-questions, as the 'pyramid of questions' shows; at its top, why-questions are the prototype of causal questions. Why-questions are usually related to scientific explanations. Although covering-law explanation is characteristic of the physical sciences, it is less common in biological or medical knowledge: in medicine, laws that apply to all cases are rare, and doctors seem to express their knowledge using mechanisms instead of natural laws. In this paper we approach causal questions with the aim of: (1) answering what-questions by identifying the cause of an effect; (2) answering how-questions by selecting an appropriate part of a mechanism that relates cause-effect pairs; and (3) answering why-questions by identifying ultimate causes within the answers to how-questions. In this task, we hypothesize that why-questions relate to scientific explanations in both a negative and a positive sense: (i) as said above, scientific explanations in biology are based on mechanisms instead of natural laws; (ii) scientific explanations are generally concerned with deepening, providing explanations as detailed as possible. Thus, we conjecture that answers to why-questions must find the ultimate causes in a mechanism and link them to the prior cause, summarizing the intermediate nodes, in order to provide a comprehensible answer. Mackie's INUS account of causality offers theoretical support for this solution.
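A toy sketch of the distinction between the question types, over a hypothetical cause-effect mechanism graph: what-questions return direct causes, while why-questions trace cause links back to ultimate (root) causes.

```python
# Illustrative mechanism graph; the medical chain shown is a made-up example.
mechanism = {  # edges: cause -> effects
    "smoking": ["tar deposits"],
    "tar deposits": ["cell mutation"],
    "cell mutation": ["lung cancer"],
}

def direct_causes(effect):
    """Answer a what-question: which nodes directly cause this effect?"""
    return [c for c, effs in mechanism.items() if effect in effs]

def ultimate_causes(effect):
    """Answer a why-question: follow cause links back to root causes."""
    frontier, roots = direct_causes(effect), set()
    while frontier:
        node = frontier.pop()
        parents = direct_causes(node)
        if parents:
            frontier.extend(parents)
        else:
            roots.add(node)
    return roots

print(direct_causes("lung cancer"))    # "what causes lung cancer?"
print(ultimate_causes("lung cancer"))  # "why does lung cancer occur?"
```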
{"title":"Mining answers for causal questions in a medical example","authors":"Alejandro Sobrino, J. A. Olivas, C. Puente","doi":"10.1109/ISDA.2011.6121694","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121694","url":null,"abstract":"The aim of this paper is to approach causal questions in a medical domain. Causal questions par excellence are what, how and why-questions. The ‘pyramid of questions’ shows this. At the top, why-questions are the prototype of causal questions. Usually why-questions are related to scientific explanations. Although cover law explanation is characteristically of physical sciences, it is less common in biological or medical knowledge. In medicine, laws applied to all cases are rare. It seems that doctors express their knowledge using mechanisms instead of natural laws. In this paper we will approach causal questions with the aim of: (1) answering what-questions as identifying the cause of an effect; (2) answering how-questions as selecting an appropriate part of a mechanism that relates pairs of cause-effect (3) answering why-questions as identifying ultimate causes in the answers of how-questions. In this task, we hypothesize that why-questions are related to scientific explanations in a negative and a positive note: (i) as previously said, scientific explanations in biology are based on mechanisms instead of natural laws; (ii) scientific explanations are generally concerned with deepening, providing explanations as detailed as possible. Thus, we conjecture that answers to why-questions have to find the ultimate causes in a mechanism and link them to the prior cause summarizing the intermediate nodes in order to provide a comprehensible answer. The Mackie´s INUS causality offers a theoretical support for this solution.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129273709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive geolocated cultural information system for mobile devices
Pub Date: 2011-11-01 | DOI: 10.1109/ISDA.2011.6121639
Sergio Misó, Miguel J. Hornos, María Luisa Rodríguez
Nowadays, the Internet is a complex information space in which information is spread among many different data sources. Public and private institutions have already worked on projects to gather geolocated information about public places. However, no work has been carried out on information repositories about the activities developed in those places. Thus, for lack of clear reference information, citizens usually do not know how to take advantage of what their cultural environment offers. This paper presents a system for mobile devices that tries to solve this problem: personalized information on events taking place in the user's town is shown, taking into account his or her interests and preferences. Moreover, the user's interaction with the application enables the system to evolve and improve the user model, so that more accurate personalized data can be provided at any given time.
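A schematic sketch of the adaptation loop just described: events are ranked against a user's interest profile, and the profile is nudged towards the categories of events the user actually opens. All structures, categories, and the learning rate are hypothetical illustrations.

```python
# Illustrative preference-ranking and user-model update loop.
def rank_events(events, profile):
    """events: list of (name, category); profile: category -> weight."""
    return sorted(events, key=lambda e: profile.get(e[1], 0.0), reverse=True)

def update_profile(profile, selected_category, lr=0.1):
    """Reinforce the category of an event the user chose to open."""
    profile[selected_category] = profile.get(selected_category, 0.0) + lr
    return profile

profile = {"music": 0.6, "theatre": 0.2}
events = [("Jazz night", "music"), ("Hamlet", "theatre"), ("Art fair", "art")]
print(rank_events(events, profile))
profile = update_profile(profile, "art")  # user opened the art fair
print(rank_events(events, profile))
```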
{"title":"Adaptive geolocated cultural information system for mobile devices","authors":"Sergio Misó, Miguel J. Hornos, María Luisa Rodríguez","doi":"10.1109/ISDA.2011.6121639","DOIUrl":"https://doi.org/10.1109/ISDA.2011.6121639","url":null,"abstract":"Nowadays, Internet is a complex information space in which information is spread among many different data sources. Public and private institutions have already worked on some projects to gather geolocalized information about public places. However, no work has been carried out on information repositories about the activities developed in those places. Thus, due to the lack of clear reference information, citizens do not usually know how to take advantage of offers provided by their cultural environment. This paper presents a system for mobile devices that tries to solve this problem. Personalized information on events that take place in the user's town will be shown, taking into account her/his interests and preferences. Besides, the user interaction with the application will enable the system to evolve and improve the user model so that more accurate personalized data can be provided at a specific time.","PeriodicalId":433207,"journal":{"name":"2011 11th International Conference on Intelligent Systems Design and Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124730064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}