Drug-drug interaction (DDI) study is an important aspect of therapy management and drug efficacy. DDI studies investigate how drugs interact with each other and determine whether these interactions may lead to adverse effects or nullify each other's therapeutic effects. In this paper we model the metabolic pathways of drugs, including the reaction effects between drugs and the related enzymes. By modeling the reaction effects, our model captures the degree of the effects of the interacting drugs. We introduce a novel methodology that combines semantics, an ontology to model the concepts and interactions, and Answer Set Programming for temporal reasoning. We illustrate our method by inferring the effects of DDIs among three drugs: clozapine, olanzapine, and fluvoxamine.
{"title":"Semantic Inference for Pharmacokinetic Drug-Drug Interactions","authors":"A. Moitra, R. Palla, L. Tari, M. Krishnamoorthy","doi":"10.1109/ICSC.2014.36","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
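The paper's reasoning is done in Answer Set Programming; as a rough illustration of the underlying pharmacokinetic inference (not the authors' encoding), the Python sketch below infers an interaction when one drug inhibits an enzyme that metabolizes another. The enzyme assignments are illustrative simplifications, not the paper's full pathway model.

```python
# Minimal sketch (not the paper's ASP encoding): infer pharmacokinetic
# DDIs from metabolism and inhibition facts. Enzyme assignments are
# illustrative simplifications of the modeled pathways.

metabolized_by = {              # drug -> enzymes that metabolize it
    "clozapine": {"CYP1A2"},
    "olanzapine": {"CYP1A2"},
    "fluvoxamine": {"CYP2D6"},
}
inhibits = {"fluvoxamine": {"CYP1A2"}}   # drug -> enzymes it inhibits

def interactions(drugs):
    """Return (inhibitor, victim, enzyme) triples: co-administering the
    inhibitor slows the victim drug's clearance via the shared enzyme."""
    found = set()
    for a in drugs:
        for b in drugs:
            if a == b:
                continue
            for enz in inhibits.get(a, set()) & metabolized_by.get(b, set()):
                found.add((a, b, enz))
    return found

print(interactions({"clozapine", "olanzapine", "fluvoxamine"}))
```

On this toy fact base, fluvoxamine is flagged as raising the levels of both clozapine and olanzapine through CYP1A2, which mirrors the interaction the abstract uses as its running example.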
Hamid Mousavi, Deirdre Kerr, Markus R Iseli, C. Zaniolo
Ontologies are a vital component of most knowledge-based applications, including semantic web search, intelligent information integration, and natural language processing. In particular, we need effective tools for generating in-depth ontologies that achieve comprehensive coverage of specific application domains of interest, while minimizing the time and cost of this process. Therefore, we cannot rely on the manual or highly supervised approaches often used in the past, since they do not scale well. We instead propose a new approach that automatically generates domain-specific ontologies from a small corpus of documents using deep NLP-based text mining. Starting from an initial small seed of domain concepts, our OntoHarvester system iteratively extracts ontological relations connecting existing concepts to other terms in the text, and adds strongly connected terms to the current ontology. As a result, OntoHarvester (i) remains focused on the application domain, (ii) is resistant to noise, and (iii) generates very comprehensive ontologies from modest-size document corpora. In fact, starting from a small seed, OntoHarvester produces ontologies that outperform both manually generated ontologies and ontologies generated by current techniques, even those that require very large, well-focused data sets.
{"title":"Harvesting Domain Specific Ontologies from Text","authors":"Hamid Mousavi, Deirdre Kerr, Markus R Iseli, C. Zaniolo","doi":"10.1109/ICSC.2014.12","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
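The iterative growth loop described above can be sketched as follows. This is a hypothetical simplification (the relation triples, link-count threshold, and fixpoint loop are our assumptions), not the OntoHarvester implementation.

```python
# Illustrative sketch of the seed-growing loop: starting from seed
# concepts, add any term whose number of extracted relations to the
# current ontology meets a threshold, and repeat until nothing changes.

# (term_a, relation, term_b) triples, as if mined from text
relations = [
    ("algebra", "is_a", "mathematics"),
    ("equation", "part_of", "algebra"),
    ("equation", "related_to", "mathematics"),
    ("variable", "part_of", "equation"),
    ("poetry", "related_to", "literature"),
]

def harvest(seed, relations, min_links=1):
    ontology = set(seed)
    changed = True
    while changed:
        changed = False
        for term in {t for r in relations for t in (r[0], r[2])}:
            if term in ontology:
                continue
            links = sum(1 for a, _, b in relations
                        if (a == term and b in ontology)
                        or (b == term and a in ontology))
            if links >= min_links:       # "strongly connected" to ontology
                ontology.add(term)
                changed = True
    return ontology

print(harvest({"mathematics"}, relations))
```

Note how terms from an unrelated domain ("poetry", "literature") are never absorbed, which illustrates the "remains focused on the application domain" claim.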
We investigate user requirements regarding the interface design of a semantic multimedia search and retrieval engine, based on a prototypical implementation of a search engine for multimedia content on the web. Unlike existing image and video search engines, we are interested in true multimedia content that combines different media assets into multimedia documents such as PowerPoint presentations and Flash files. In a user study with 20 participants, we conducted a formative evaluation based on the think-aloud method and semi-structured interviews in order to obtain requirements for a future web search engine for multimedia content. The interviews are complemented by a paper-and-pencil questionnaire to obtain quantitative information, and by mockups demonstrating the user interface of a future multimedia search and retrieval engine.
{"title":"Requirements Elicitation Towards a Search Engine for Semantic Multimedia Content","authors":"Lydia Weiland, Felix Hanser, A. Scherp","doi":"10.1109/ICSC.2014.35","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
Cultural heritage resources are huge and heterogeneous. They include highly structured, unstructured, and semi-structured data obtained from both authorized and unauthorized sources, involving multimedia content such as text, audio, and video. With the rapid development of the web, more and more cultural heritage organizations use digital methods to record, store, and represent their arts and events. However, searching this information once it is stored remains a challenging task. We propose the use of semantic web techniques to make the data more structured, so that items in the cultural heritage domain can be fully represented and made as accessible to the public as possible. This paper proposes a method to convert a traditional cultural heritage website into one that is well designed and content rich. The method includes an ontology model that can automatically incorporate new classes and instances as input through asserted and inferred models. It can also align the local ontology with external online ontologies. Through the proposed method, this paper also discusses several urgent issues concerning automatic data conversion, semantic search, and user involvement.
{"title":"Using Aligned Ontology Model to Convert Cultural Heritage Resources into Semantic Web","authors":"Li Bing, Keith C. C. Chan, L. Carr","doi":"10.1109/ICSC.2014.39","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
The ability to identify, process, and comprehend the essential elements of information associated with a given operational environment can be used to reason about how the actors within the environment can best respond. This is often referred to as "situation assessment," the end state of which is "situation awareness," which can be simply defined as "knowing what is going on around you." Taken together, these are important fields of study concerned with perception of the environment, critical to decision-makers in many complex, dynamic domains, including aviation, military command and control, and emergency management. The primary goal of our research is to identify some of the main technical challenges associated with automated situation assessment in general, and to propose an information processing methodology that meets those challenges, which we call Find-to-Forecast (F2F). The F2F framework supports accessing heterogeneous information (structured and unstructured), which is normalized into a standard RDF representation. Next, the framework identifies mission-relevant information elements, filtering out irrelevant (or low-priority) information and fusing the remaining relevant information. The next steps in the F2F process involve focusing operator attention on essential elements of mission information, and reasoning over the fused, relevant information to forecast potential courses of action based on the evolving situation, changing data, and uncertain knowledge. This paper provides an overview of the overall F2F methodology, to provide context, followed by a more detailed consideration of the "focus" algorithm, which uses contextual semantics to evaluate the value of new information relative to an operator's situational understanding during evolving events.
{"title":"Find-to-Forecast Process: An Automated Methodology for Situation Assessment","authors":"K. Bimson, Ahmad Slim, G. Heileman","doi":"10.1109/ICSC.2014.60","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
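As a hedged illustration of what a "focus"-style step might compute, the sketch below scores an incoming report by its mission relevance weighted by its novelty against already-known facts. The scoring formula, term sets, and example reports are entirely hypothetical; the paper's actual algorithm reasons with contextual semantics over RDF.

```python
# Hypothetical sketch of a "focus"-style scorer: approximate the value
# of new information as (overlap with mission terms) x (fraction of the
# item not already known to the operator).

def focus_score(item_terms, mission_terms, known_terms):
    item = set(item_terms)
    relevance = len(item & set(mission_terms)) / max(len(item), 1)
    novelty = len(item - set(known_terms)) / max(len(item), 1)
    return relevance * novelty

known = {"flood", "bridge"}                       # operator's current picture
mission = {"flood", "evacuation", "hospital"}     # essential mission elements
reports = {
    "old news": ["flood", "bridge"],              # relevant but already known
    "actionable": ["flood", "evacuation", "hospital"],
}
ranked = sorted(reports,
                key=lambda r: focus_score(reports[r], mission, known),
                reverse=True)
print(ranked)
```

A report the operator has fully seen scores zero regardless of relevance, so attention is directed only at information that changes the situational picture.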
In recent years, we have witnessed a deluge of multimedia data such as text, images, and videos. However, research on managing and retrieving these data efficiently is still at an early stage. Conventional tag-based searching approaches suffer from noisy or incomplete tags. As a result, content-based multimedia data management frameworks have become increasingly popular. In this research direction, multimedia high-level semantic concept mining and retrieval is one of the fastest-developing research topics, requiring joint efforts from researchers in both the data mining and multimedia domains. One great challenge is to bridge the semantic gap, i.e., the gap between high-level concepts and low-level features. Recently, positive inter-concept correlations have been utilized to capture the context of a concept and bridge this gap. However, negative correlations have rarely been studied because of the difficulty of mining and utilizing them. In this paper, a concept mining and retrieval framework utilizing negative inter-concept correlations is proposed. Several research problems, such as negative correlation selection, weight estimation, and score integration, are addressed. Experimental results on the TRECVID 2010 benchmark data set demonstrate that the proposed framework gives promising performance.
{"title":"Enhancing Multimedia Semantic Concept Mining and Retrieval by Incorporating Negative Correlations","authors":"Tao Meng, Yang Liu, M. Shyu, Yilin Yan, C. Shu","doi":"10.1109/ICSC.2014.30","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
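The score-integration idea can be sketched as follows: a target concept's detector score is boosted by positively correlated context concepts and penalized by negatively correlated ones. The linear form and the weights below are illustrative assumptions, not the paper's learned model.

```python
# Sketch of score integration with positive and negative inter-concept
# correlations: weights are illustrative, not learned values.

def integrate(target_score, context_scores, pos_weights, neg_weights):
    boost = sum(w * context_scores.get(c, 0.0) for c, w in pos_weights.items())
    penalty = sum(w * context_scores.get(c, 0.0) for c, w in neg_weights.items())
    return target_score + boost - penalty

# e.g. "boat" correlates positively with "water", negatively with "desert"
ctx = {"water": 0.9, "desert": 0.8}
pos = {"water": 0.3}
neg = {"desert": 0.4}
print(integrate(0.5, ctx, pos, neg))   # 0.5 + 0.3*0.9 - 0.4*0.8
```

A strong "desert" detection lowers confidence in "boat" even when the raw detector is uncertain, which is exactly the information that positive-only correlation models discard.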
In many Semantic Web applications, having RDF predicates sorted by significance is of primary importance for improving usability and performance. In this paper we focus on predicates available in DBpedia, the most important Semantic Web data source, counting 470 million English triples. Although there is plenty of work in the literature on ranking entities or RDF query results, none of it seems to specifically address the problem of computing predicate rank. We address the problem by associating with each DBpedia property (also known as a predicate or attribute of RDF triples) a number of original features specifically designed to provide sort-by-importance quantitative measures, automatically computable from an online SPARQL endpoint or an RDF dataset. By computing these features on a number of entity properties, we created a learning set and tested the performance of a number of well-known learning-to-rank algorithms. Our first experimental results show that the approach is effective and fast.
{"title":"Computing On-the-Fly DBpedia Property Ranking","authors":"A. Dessì, M. Atzori","doi":"10.1109/ICSC.2014.55","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
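As a rough sketch of sort-by-importance features, the code below computes two simple predicate statistics (usage frequency and number of distinct subjects) from a toy triple set and combines them into a ranking score. Both features and the fixed linear combination are stand-ins of our own; the paper computes its features from DBpedia and learns the combination with learning-to-rank algorithms.

```python
# Sketch: per-predicate features from an RDF triple set, combined into a
# sort-by-importance score. Feature choice and weights are illustrative.

triples = [
    ("Rome", "country", "Italy"),
    ("Rome", "population", "2870500"),
    ("Milan", "country", "Italy"),
    ("Milan", "population", "1372000"),
    ("Rome", "timeZone", "CET"),
]

def predicate_features(triples):
    feats = {}
    for s, p, o in triples:
        f = feats.setdefault(p, {"freq": 0, "subjects": set()})
        f["freq"] += 1          # how often the predicate is used
        f["subjects"].add(s)    # across how many distinct entities
    return feats

def rank(triples, w_freq=1.0, w_subj=1.0):
    feats = predicate_features(triples)
    score = {p: w_freq * f["freq"] + w_subj * len(f["subjects"])
             for p, f in feats.items()}
    return sorted(score, key=score.get, reverse=True)

print(rank(triples))
```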
In this paper, we present a dynamic context-dependent weighting method for the vector space model. A meaning is decided relative to a context, dynamically. Vector space models, including latent semantic indexing (LSI), measure the relative correlations of targets represented as vectors. However, in most vector space methods the vectors are static. It is important to weight each element of each vector according to the context. Moreover, understanding a topic increasingly requires summarizing massive data rather than reading a single document, so the vectors should be created from the data set that represents the topic. That is, vectors should be created dynamically, according to both the context and the data distribution. The key feature of our method is the dynamic calculation of each vector element according to the context. Context-dependent weighting also reduces the vector dimensionality, so correlations can be measured at low computational cost.
{"title":"Semantic Context-Dependent Weighting for Vector Space Model","authors":"T. Nakanishi","doi":"10.1109/ICSC.2014.49","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
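A minimal sketch of context-dependent weighting with the resulting dimension reduction, under our own simplifying assumptions (dimension-wise context weights and a fixed drop threshold, not the paper's formulation):

```python
# Sketch: re-weight each vector element by its relevance to the current
# context, and drop dimensions whose context weight falls below a
# threshold, reducing dimensionality before correlation is measured.

def contextualize(vec, context_weight, threshold=0.1):
    """vec and context_weight map dimension name -> value / weight."""
    return {d: v * context_weight.get(d, 0.0)
            for d, v in vec.items()
            if context_weight.get(d, 0.0) >= threshold}

doc = {"bank": 0.8, "river": 0.6, "loan": 0.5, "tree": 0.2}
finance_context = {"bank": 1.0, "loan": 0.9, "river": 0.05}
print(contextualize(doc, finance_context))
```

In a finance context the "river" dimension is dropped, so the same document vector yields different correlations than it would in, say, a geography context.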
Pub Date: 2014-06-16. DOI: 10.1142/S1793351X14400078
D. Popolov, Joseph R. Barr
This paper discusses principles for the design of natural language processing (NLP) systems that automatically extract data from doctors' notes, laboratory results, and other medical documents in free-form text. We argue that rather than searching for 'atomic units of meaning' in the text and then trying to generalize them to a broader set of documents through an increasingly complicated system of rules, an NLP practitioner should take concepts as a whole as the meaningful unit of text. This simplifies the rules and makes the NLP system easier to maintain and adapt. The departure point is purely practical; however, a deeper investigation of typical problems with the implementation of such systems leads us to a discussion of broader theoretical principles underlying NLP practice.
{"title":"\"Units of Meaning\" in Medical Documents: Natural Language Processing Perspective","authors":"D. Popolov, Joseph R. Barr","doi":"10.1142/S1793351X14400078","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
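To illustrate the contrast the authors draw, the sketch below matches whole concept phrases rather than assembling meanings from token-level rules. The phrase lexicon and labels are hypothetical examples, not drawn from the paper.

```python
# Illustrative contrast: treat a clinical concept as a whole phrase
# (one lexicon entry) instead of building it from per-token rules.
# The lexicon below is a hypothetical toy example.

concepts = {
    "shortness of breath": "SYMPTOM",
    "chest pain": "SYMPTOM",
    "blood pressure": "MEASUREMENT",
}

def extract_concepts(note):
    text = note.lower()
    return {phrase: label for phrase, label in concepts.items()
            if phrase in text}

note = "Patient reports chest pain and shortness of breath."
print(extract_concepts(note))
```

Adding a new concept is one lexicon entry rather than a new interaction among token rules, which is the maintainability argument the abstract makes.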
F. Amato, Aniello De Santo, V. Moscato, Fabio Persia, A. Picariello
Detection of human behavior in On-line Social Networks (OSNs) has become more and more important for a wide range of applications, such as security, marketing, and parental controls, opening a wide range of novel research areas that have not yet been fully addressed. In this paper, we present a two-stage method for detecting anomalies in human behavior on a social network. First, we use Markov chains to automatically learn, from the social network graph, a number of models of human behavior (normal behaviors); the second stage applies an activity detection framework based on the concept of possible worlds to detect all activities that are unexplained with respect to the normal behaviors. Preliminary experiments using Facebook data show the approach's efficiency and effectiveness.
{"title":"Detecting Unexplained Human Behaviors in Social Networks","authors":"F. Amato, Aniello De Santo, V. Moscato, Fabio Persia, A. Picariello","doi":"10.1109/ICSC.2014.21","journal":"2014 IEEE International Conference on Semantic Computing","publicationDate":"2014-06-16"}
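The two-stage idea can be sketched as follows, under our own simplifications (action-level sequences and a plain likelihood threshold, not the paper's framework): stage one estimates Markov-chain transition probabilities from "normal" activity sequences; stage two flags a new sequence whose transition likelihood is zero (or very low) as unexplained.

```python
# Sketch of the two-stage method: (1) learn a Markov chain of action
# transitions from normal activity sequences; (2) score new sequences
# by transition likelihood, flagging low-likelihood ones as unexplained.
from collections import Counter

def learn_chain(sequences):
    counts, totals = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

def likelihood(chain, seq):
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= chain.get((a, b), 0.0)   # unseen transition -> zero likelihood
    return p

normal = [["login", "browse", "post", "logout"],
          ["login", "browse", "logout"]]
chain = learn_chain(normal)

print(likelihood(chain, ["login", "browse", "logout"]))   # follows normal model
print(likelihood(chain, ["login", "post", "post"]))       # unexplained
```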