Ontology is essential for formalizing domain knowledge for effective human-computer interaction (e.g., expert finding). Many researchers have proposed approaches that measure the similarity between concepts by accessing a fuzzy domain ontology. However, constructing domain ontologies by hand is labor-intensive and tedious. In this paper, we propose an approach that mines domain concepts from the Wikipedia Category Network and generates fuzzy relations based on a concept-vector extraction method that measures the relatedness between a single term and a concept. Our methodology conceptualizes domain knowledge by mining the Wikipedia Category Network. An empirical experiment on a TREC dataset evaluates the robustness of the approach. The results show that the fuzzy domain ontology constructed by the proposed approach is robust and achieves satisfactory accuracy in information retrieval tasks.
{"title":"Mining Fuzzy Domain Ontology Based on Concept Vector from Wikipedia Category Network","authors":"Cheng-Yu Lu, Shou-Wei Ho, Jen-Ming Chung, Fu-Yuan Hsu, Hahn-Ming Lee, Jan-Ming Ho","doi":"10.1109/WI-IAT.2011.140","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.140","url":null,"abstract":"Ontology is essential in the formalization of domain knowledge for effective human-computer interactions (i.e., expert-finding). Many researchers have proposed approaches to measure the similarity between concepts by accessing fuzzy domain ontology. However, engineering of the construction of domain ontologies turns out to be labor intensive and tedious. In this paper, we propose an approach to mine domain concepts from Wikipedia Category Network, and to generate the fuzzy relation based on a concept vector extraction method to measure the relatedness between a single term and a concept. Our methodology can conceptualize domain knowledge by mining Wikipedia Category Network. An empirical experiment is conducted to evaluate the robustness by using TREC dataset. Experiment results show the constructed fuzzy domain ontology derived by proposed approach can discover robust fuzzy domain ontology with satisfactory accuracy in information retrieval tasks.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128430015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Consider a network in which nodes are websites and the weight of a link connecting two nodes corresponds to the average number of users who visit both websites over long timescales. Such a user-driven Web network is not only invaluable for understanding how crowds' interests collectively spread on the Web, but also useful for applications such as advertising and search. In this paper, we construct such a network by 'putting together' pieces of information publicly available from popular analytics websites. Our contributions are threefold. First, we design a crawler and a normalization methodology that enable us to construct a user-driven Web network from limited publicly available information, and we validate the high accuracy of our approach. Second, we evaluate the properties of our network and demonstrate that it exhibits small-world, seed-free, and scale-free phenomena. Finally, we build an application, the website selector, on top of the user-driven network. Its core idea is that, by exploiting the knowledge that a set of websites shares many common users, an advertiser may prefer to display ads on only a subset of these websites, optimizing the budget allocation while increasing the visibility of the ads on the other websites. Our website selector is tailored for ad commissioners and can easily be embedded in their ad selection algorithms.
{"title":"Understanding Crowds' Migration on the Web","authors":"Yong Wang, Komal Pal, A. Kuzmanovic","doi":"10.1109/WI-IAT.2011.40","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.40","url":null,"abstract":"Consider a network where nodes are websites and the weight of a link that connects two nodes corresponds to the average number of users that visits both of the two websites over longer timescales. Such user-driven Web network is not only invaluable for understanding how crowds' interests collectively spread on the Web, but also useful for applications such as advertising or search. In this paper, we manage to construct such a network by 'putting together' pieces of information publicly available from the popular analytics websites. Our contributions are threefold. First, we design a crawler and a normalization methodology that enable us to construct a user-driven Web network based on limited publicly-available information, and validate the high accuracy of our approach. Second, we evaluate the unique properties of our network, and demonstrate that it exhibits small-world, seed-free, and scale-free phenomena. Finally, we build an application, website selector, on top of the user-driven network. The core concept utilized in the website selector is that by exploiting the knowledge that a number of websites share a number of common users, an advertiser might prefer displaying his ads only on a subset of these websites to optimize the budget allocation, and in turn increase the visibility of his ads on other websites. Our websites elector system is tailored for ad commissioners and it could be easily embedded in their ad selection algorithms.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130166664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Even though Web image search queries are often ambiguous, traditional search engines retrieve and present results solely based on relevance ranking, where only the most common and popular interpretations of the query are considered. Rather than assuming that all users are interested in the most common meaning of a query, a more sensible approach may be to produce a diversified set of images that covers the various aspects of the query, under the expectation that at least one of these interpretations will match the searcher's needs. However, promoting diversity in the search results has the side effect of decreasing precision for the most common sense. In this paper, we evaluate this trade-off in the context of a method that explicitly diversifies image search results via concept-based query expansion using Wikipedia. Experiments with controlling the degree of diversification illustrate the balance between diversity and precision for both ambiguous and specific queries. The ultimate goal of this research is an automatic method for tuning the diversification parameter based on the degree of ambiguity of the original query.
{"title":"Evaluating the Trade-Offs between Diversity and Precision for Web Image Search Using Concept-Based Query Expansion","authors":"Enamul Hoque, O. Hoeber, Minglun Gong","doi":"10.1109/WI-IAT.2011.11","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.11","url":null,"abstract":"Even though Web image search queries are often ambiguous, traditional search engines retrieve and present results solely based on relevance ranking, where only the most common and popular interpretations of the query are considered. Rather than assuming that all users are interested in the most common meaning of the query, a more sensible approach may be to produce a diversified set of images that cover the various aspects of the query, under the expectation that at least one of these interpretations will match the searcher's needs. However, such a promotion of diversity in the search results has the side-effect of decreasing the precision of the most common sense. In this paper, we evaluate this trade-off in the context of a method for explicitly diversifying image search results via concept-based query expansion using Wikipedia. Experiments with controlling the degree of diversification illustrate this balance between diversity and precision for both ambiguous and specific queries. Our ultimate goal of this research is to propose an automatic method for tuning the diversification parameter based on degree of ambiguity of the original query.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130780953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Keeney, Aidan Boran, Ivan Bedini, C. Matheus, P. Patel-Schneider
Integrating and relating heterogeneous data using inference is one of the cornerstones of semantic technologies, and there are a variety of ways in which this may be achieved. Cross-source relationships can be automatically translated or inferred using the axioms of RDFS/OWL, via user-generated rules, or as the result of SPARQL query result transformations. For a given problem it is not always obvious which approach (or combination of approaches) will be the most effective, and few guidelines exist for making this choice. This paper discusses these three approaches and demonstrates them using an "acquaintance" relationship drawn from data residing in common RDF information sources such as FOAF and DBLP data stores. The implementation of each approach is described along with practical considerations for its use. Quantitative and qualitative evaluation results for each approach are presented, and the paper concludes with initial suggestions for guiding principles to help select an appropriate approach for integrating heterogeneous semantic data sources.
{"title":"Approaches to Relating and Integrating Semantic Data from Heterogeneous Sources","authors":"J. Keeney, Aidan Boran, Ivan Bedini, C. Matheus, P. Patel-Schneider","doi":"10.1109/WI-IAT.2011.129","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.129","url":null,"abstract":"Integrating and relating heterogeneous data using inference is one of the cornerstones of semantic technologies and there are a variety of ways in which this may be achieved. Cross source relationships can be automatically translated or inferred using the axioms of RDFS/OWL, via user generated rules, or as the result of SPARQL query result transformations. For a given problem it is not always obvious which approach (or combination of approaches) will be the most effective and few guidelines exist for making this choice. This paper discusses these three approaches and demonstrates them using an \"acquaintance\" relationship drawn from data residing in common RDF information sources such as FOAF and DBLP data stores. The implementation of each approach is described along with practical considerations for their use. Quantitative and qualitative evaluation results of each approach are presented and the paper concludes with initial suggestions for guiding principles to help in selecting an appropriate approach for integrating heterogeneous semantic data sources.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127901008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Maag, Christian Mark, H. Krüger, M. Fullerton, F. Busch, A. Leonhardt
Drivers on the road are constantly interacting. The development of new sensor, display, and communication systems has laid the basis for ambient intelligence (AmI) technology in road traffic. The effects of such devices on driver behavior and emotional response have to be analyzed, e.g., in driving simulator studies. By implementing the results of these studies in a traffic simulation and "scaling up", emerging effects at the system level can be investigated, e.g., on emotional climate, road safety, and traffic flow. This paper describes an environment for the analysis of AmI systems applied to merging points on highways. Results from an advanced driving simulator study are presented and show positive effects of such a system on driver anger. The implications of such a device at the system level of road traffic can be investigated using traffic simulation. This is illustrated by evaluating the anger present in the traffic simulator interactions using the results from the driving simulator experiment.
{"title":"Examining Individual and System Level Effects of AmI Traffic Environments","authors":"C. Maag, Christian Mark, H. Krüger, M. Fullerton, F. Busch, A. Leonhardt","doi":"10.1109/WI-IAT.2011.214","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.214","url":null,"abstract":"Drivers on the road are constantly interacting. The development of new sensor, display and communication systems has laid the basis for ambient intelligence (AmI) technology in road traffic. The effects of such devices on driver behavior and emotional response have to be analyzed, e.g. by using driving simulator studies. By implementing the results of these studies into traffic simulation and \"scaling up\", emerging effects on the system level can be investigated, e.g. on emotional climate, road safety, and traffic flow. This paper describes an environment for the analysis of AmI systems applied to merging points on highways. Results from an advanced driving simulator study are presented and show positive effects of such a system on anger in driving. The implications of such a device on the system level of road traffic can be investigated by using the instrument of traffic simulation. This is illustrated by evaluating the anger present in the traffic simulator interactions using the results from the driving simulator experiment.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126521623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building on our results on semantic social network analysis, we present a community detection algorithm, SemTagP, that takes advantage of the semantic data captured while structuring the RDF graphs of social networks. SemTagP not only detects but also labels communities by exploiting, in addition to the structure of the social graph, the tags used by people during the social tagging process as well as the semantic relations inferred between tags. In doing so, we are able to refine the partitioning of the social graph with semantic processing and to label the activity of the detected communities. We tested and evaluated this algorithm on the social network built from Ph.D. theses funded by ADEME, the French Environment and Energy Management Agency, and showed how this approach allows us to detect and label communities of interest and control the precision of the labels.
{"title":"SemTagP: Semantic Community Detection in Folksonomies","authors":"Guillaume Erétéo, Fabien L. Gandon, M. Buffa","doi":"10.1109/WI-IAT.2011.98","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.98","url":null,"abstract":"Building on top of our results on semantic social network analysis, we present a community detection algorithm, SemTagP, that takes benefits of the semantic data that were captured while structuring the RDF graphs of social networks. SemTagP not only offers to detect but also to label communities by exploiting (in addition to the structure of the social graph) the tags used by people during the social tagging process as well as the semantic relations inferred between tags. Doing so, we are able to refine the partitioning of the social graph with semantic processing and to label the activity of detected communities. We tested and evaluated this algorithm on the social network built from Ph.D. theses funded by ADEME, the French Environment and Energy Management Agency. We showed how this approach allows us to detect and label communities of interest and control the precision of the labels.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116713490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sébastien Chipeaux, F. Bouquet, C. Lang, N. Marilleau
In this paper, we propose a modeling approach for a spatial complex system. The targeted system is the city with its mobility patterns; the goal of the MIRO project is to study service accessibility in the city. We simulate the city with multi-agent systems, using agents to represent each part of the system (individuals, buildings, streets, etc.). The MIRO team is composed of scientists from several domains (computer science, geography, economics), so we want to construct a model of the city that allows the knowledge of each domain to be shared. As a next step, we will use a verification approach to validate the model and then the simulator, since we want to generate the simulator from the model. We therefore propose a method for modeling such a complex system, based on AML (Agent Modeling Language), a language well suited to modeling multi-agent systems. We then present a spatial AML meta-model coupled with a method. The use case is the MIRO project.
{"title":"Modelling of Complex Systems with AML as Realized in MIRO Project","authors":"Sébastien Chipeaux, F. Bouquet, C. Lang, N. Marilleau","doi":"10.1109/WI-IAT.2011.195","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.195","url":null,"abstract":"In this paper, we propose a modeling approach for a spatial complex system. The targeted system is the city with its mobility patterns. The goal of MIRO project is to study service accessibility in the city. In fact, we simulate the city with multi agent systems using them to represent each part of the system(individuals, buildings, streets,). The MIRO team is composed by scientists of several domains (computer sciences, geography or economy), so we want to construct a model of the city to share knowledges of each domain. The next step, we will use verification approach in order to validate the model and then the simulator because we want to generate simulator from model. Thus, we propose a method for modeling such complex system. This method is based on AML (Agent Modeling Language) that is a language well adapted for modeling multi-agent systems. We, then, present a spatial AML meta-model coupled with a method. The use case is the MIRO project.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123867278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The k-means clustering method is a widely used clustering technique for the Web because of its simplicity and speed. However, the clustering result depends heavily on the initial cluster centers, which are chosen uniformly at random from the data points. We propose a seeding method based on independent component analysis for the k-means clustering method. We evaluate the performance of our proposed method and compare it with other seeding methods using benchmark datasets, and we apply it to a Web corpus provided by ODP. The experiments show that the normalized mutual information of our proposed method is better than that of the k-means and k-means++ clustering methods. Therefore, the proposed method is useful for Web corpora.
{"title":"Independent Component Analysis Based Seeding Method for K-Means Clustering","authors":"T. Onoda, Miho Sakai, S. Yamada","doi":"10.1109/WI-IAT.2011.29","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.29","url":null,"abstract":"The k-means clustering method is a widely used clustering technique for the Web because of its simplicity and speed. However, the clustering result depends heavily on the chosen initial clustering centers, which are chosen uniformly at random from the data points. We propose a seeding method based on the independent component analysis for the k-means clustering method. We evaluate the performance of our proposed method and compare it with other seeding methods by using benchmark datasets. We applied our proposed method to a Web corpus, which is provided by ODP. The experiments show that the normalized mutual information of our proposed method is better than the normalized mutual information of k-means clustering method and k-means++ clustering method. Therefore, the proposed method is useful for Web corpus.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125114764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years many methods have been proposed that require semantic annotations of Web services as input. Such methods include discovery, matchmaking, composition, and execution of Web services in dynamic settings, to mention just a few. At the same time, automated Web service annotation approaches have been proposed to support the application of the former methods in settings where it is not feasible to provide the annotations manually. However, the lack of effective automated evaluation frameworks has seriously limited proper evaluation of the constructed annotations in practical settings, where the overall annotation quality of millions of Web services needs to be evaluated. This paper describes an evaluation framework for measuring the quality of semantic annotations of a large number of Web service descriptions provided in the form of WSDL and XSD documents. The evaluation framework is based on analyzing network properties, namely scale-free and small-world properties, of Web service networks, which in turn are constructed from the semantic annotations of Web services. The evaluation approach is demonstrated through the evaluation of a semi-automated annotation approach applied to a set of publicly available WSDL documents describing approximately 200,000 Web service operations.
{"title":"Evaluation of a Semi-automated Semantic Annotation Approach for Bootstrapping the Analysis of Large-Scale Web Service Networks","authors":"Shahab Mokarizadeh, Peep Küngas, M. Matskin","doi":"10.1109/WI-IAT.2011.237","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.237","url":null,"abstract":"In recent years many methods have been proposed, which require semantic annotations of Web services as an input. Such methods include discovery, match-making, composition and execution of Web services in dynamic settings, just to mention few. At the same time automated Web service annotation approaches have been proposed for supporting application of former methods in settings where it is not feasible to provide the annotations manually. However, lack of effective automated evaluation frameworks has seriously limited proper evaluation of the constructed annotations in practical settings where the overall annotation quality of millions of Web services needs to be evaluated. This paper describes an evaluation framework for measuring the quality of semantic annotations of large number of Web services descriptions provided in form of WSDL and XSD documents. The evaluation framework is based on analyzing network properties, namely scale-free and small-world properties, of Web service networks, which in turn have been constructed from semantic annotations of Web services. The evaluation approach is demonstrated through evaluation of a semi-automated annotation approach, which was applied to a set of publicly available WSDL documents describing altogether ca 200 000 Web service operations.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131276090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper examines the effectiveness of a visualization system for gaining insight into future research activities from co-authorship networks. A co-authorship network is important information when conducting a research survey. In particular, many survey needs relate to researchers' future activities, such as identifying growing researchers and potential supervisors. In a previous paper we proposed a visualization system for co-authorship networks, which provides functions for identifying research areas and for identifying temporal variation in both network structure and keyword distribution. This paper examines its effectiveness through field trials with test participants. The results are examined as a process of hypothesis verification, which shows that test participants could perform the task even though they had no background knowledge about information visualization (InfoVis).
{"title":"Visualization-Based Support of Hypothesis Verification for Research Survey with Co-authorship Networks","authors":"Takeshi Kurosawa, Y. Takama","doi":"10.1109/WI-IAT.2011.121","DOIUrl":"https://doi.org/10.1109/WI-IAT.2011.121","url":null,"abstract":"This paper examines the effectiveness of a visualization system for getting insight into future research activities from co-authorship networks. A co-authorship network is important information when doing a research survey. In particular, there are many requests on survey that relate with researchers' future activities, such as identification of growing researchers and supervisors. In previous paper we proposed a visualization system for co-authorship networks, which provides the function for identifying research areas and that for identifying temporal variation of both network structure and keyword distribution. This paper examines its effectiveness through field trials by test participants. The results are examined as the process of hypothesis verification, which shows that test participants could perform the task even though they had no background knowledge about InfoVis.","PeriodicalId":128421,"journal":{"name":"2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133508975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}