Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746725
K. Ishak, B. Archimède, P. Charbonnaud
In this article, a model of a service-oriented market is presented. It gives small and medium-sized enterprises a more equitable opportunity to integrate into business markets. Nevertheless, multi-site planning is a critical and difficult task due to the heterogeneity of the planning applications used by the various partners. To this end, SCEP-SOA, an interoperable and distributed architecture for multi-site planning, is proposed. It is based on a service-oriented architecture (SOA) and integrates the concepts of the generic planning and scheduling model SCEP (supervisor, customer, environment, and producer). This architecture enables both applicative and semantic interoperability between the different planning applications used by the partners.
Title: Enhancing interoperability between enterprise planning applications: An architectural framework (2008 Third International Conference on Digital Information Management)
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746826
W. Isa, N. Noor, Shafie Mehad
Website information architecture (IA) has matured into a discipline concerned with the design principles and architecture of information in the digital landscape. IA models for web-mediated environments, however, lack theoretical grounding, empirical evidence, and cultural context. To address these gaps, we propose the Web Architectural-Inducing Model (WA-IM) for IA. We conceptualize website IA as a set of multidimensional constructs and explore the applicability of WA-IM for IA. We conducted a web-based survey of 427 Muslim online users as a cultural case study and examined their expectations of IA in a culture-centred website, i.e., a website of the Islamic genre. Construct validity of the multifactor structure of website IA was assessed via confirmatory factor analysis (CFA) using structural equation modeling (SEM). A five-factor hypothesized goodness-of-fit model was evaluated, and the CFA verified that website IA is composed of five multidimensional constructs: 'content-information', 'content-trust', 'navigation-trait', 'navigation-wayfinding' and 'context-information design'.
Title: Exploring the applicability of Web Architectural-Inducing Model (WA-IM) for Information Architecture in cultural context: A structural equation modeling approach
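The construct-validation step above follows the standard CFA measurement model. A generic sketch in LISREL notation (the five latent factors would correspond to the paper's five constructs; the symbols are the usual textbook ones, not values from the paper):

```latex
% Generic CFA measurement model; \xi_1,\dots,\xi_5 would correspond to
% the paper's five IA constructs, x to the observed survey items.
x = \Lambda_x \xi + \delta, \qquad
\operatorname{Cov}(\xi) = \Phi, \qquad
\operatorname{Cov}(\delta) = \Theta_\delta
```

Model fit is then judged by comparing the implied covariance matrix against the sample covariance matrix via chi-square and the usual fit indices.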
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746720
Ina Blümel, J. Diet, Harald Krottmaier
In this paper, we describe a digital library initiative for non-textual documents. The proposed framework will integrate different types of content repositories, each specialized for a specific multimedia domain, into one seamless system, and will add features such as automatic annotation, full-text retrieval, and recommender services for non-textual documents. Two multimedia domains, 3D graphics and music, will be introduced. The repositories can be searched using both textual (metadata-based) and non-textual retrieval mechanisms (e.g., a sketch-based interface for searching 3D models or a query-by-humming interface for music). Domain-specific metadata models are developed, and workflows for automated content-based data analysis and indexing are proposed.
Title: Integrating multimedia repositories into the PROBADO framework
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746821
S. Mrdović, B. Perunicic-Drazenovic
This paper presents a novel payload analysis method. Runs of consecutive bytes delimited by boundary symbols are defined as words. The frequencies of word occurrences and of word-to-word transitions are used to build a model of normal behavior. A simple anomaly-score calculation is designed for fast attack detection. The method was tested on real traffic and recent attacks to demonstrate that it can be used in an IDS. Tolerance to a small number of attacks in the training data is also shown.
Title: NIDS based on payload word frequencies and anomaly of transitions
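The detection idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the boundary-symbol set and the scoring formula (fraction of never-seen words and transitions) are our assumptions.

```python
from collections import Counter

class PayloadModel:
    """Normal-behavior model over payload 'words' (byte runs split at
    boundary symbols). Boundary set and score are illustrative."""

    BOUNDARIES = set(b" \t\r\n/?&=:;,()<>\"'")

    def __init__(self):
        self.word_freq = Counter()    # word -> count in training traffic
        self.trans_freq = Counter()   # (word, next_word) -> count

    def _words(self, payload: bytes):
        word = bytearray()
        for b in payload:
            if b in self.BOUNDARIES:
                if word:
                    yield bytes(word)
                    word.clear()
            else:
                word.append(b)
        if word:
            yield bytes(word)

    def train(self, payload: bytes):
        prev = None
        for w in self._words(payload):
            self.word_freq[w] += 1
            if prev is not None:
                self.trans_freq[(prev, w)] += 1
            prev = w

    def anomaly_score(self, payload: bytes) -> float:
        """Fraction of words and transitions never seen in training."""
        words = list(self._words(payload))
        if not words:
            return 0.0
        trans = list(zip(words, words[1:]))
        unseen_w = sum(1 for w in words if w not in self.word_freq)
        unseen_t = sum(1 for t in trans if t not in self.trans_freq)
        return (unseen_w + unseen_t) / (len(words) + len(trans))
```

A payload whose words and transitions all occurred in training scores 0.0; a payload of entirely novel words scores 1.0, and an alert would fire above some tuned threshold.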
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746781
Bo Yang, Jian Qin
The study presented in this paper explored several possible ways to meet the needs of link analysis in Webometrics and developed a prototype (LinkDiscoverer) that collects data from both real-time links and search engines. The prototype consists of two parts: a crawling part that collects real-time link data from a given domain or site, and a search engine part that harvests link data from search engines using specific search commands. An experiment was conducted to evaluate the performance of LinkDiscoverer on link analysis. The results show that LinkDiscoverer's functions satisfy the needs of link analysis well. This study contributes to data collection methods and selection strategies in Webometrics.
Title: Data collection system for link analysis
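The core of the crawling part, extracting a page's outlinks and splitting them into internal and external links for webometric counts, can be sketched as follows. The class and function names are ours, not from the paper, and fetching pages over HTTP is omitted.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect absolute link targets from anchor tags on one page."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative hrefs against the page URL.
                    self.links.append(urljoin(self.base_url, value))

def classify_links(base_url: str, html: str):
    """Split a page's outlinks into internal (same host) and external."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    site = urlparse(base_url).netloc
    internal = [l for l in parser.links if urlparse(l).netloc == site]
    external = [l for l in parser.links if urlparse(l).netloc != site]
    return internal, external
```

External links found this way are the raw material for inlink/outlink statistics; the search engine part would instead issue site- or link-restricted queries and parse the result counts.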
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746715
Francisco M. Carrero, José Carlos Cortizo, J. M. G. Hidalgo
MetaMap is an online application that maps text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers in a controlled, configurable environment. There is currently no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Developing a Spanish version of MetaMap would be a huge task, since sixteen years of work stand behind the English version. Our ongoing research focuses mainly on using biomedical concepts for crosslingual text classification. In this context, using concepts instead of a bag-of-words representation allows us to approach text classification tasks independently of the language. In this paper we present our experiments on combining automatic translation techniques with biomedical ontologies to produce English text that can be processed by MMTx in order to extract concepts for text classification.
Title: Testing concept indexing in crosslingual medical text classification
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746713
Teppei Shimada, T. Tsuji, K. Higuchi
Multidimensional arrays storing multidimensional data in MOLAP are usually very sparse. They also suffer from the problem that the time consumed by sequential access to array elements depends heavily on the dimension along which the elements are accessed. This problem of "dimension dependency" can be alleviated by dividing the whole array into a set of smaller hypercube-shaped subarrays called "chunks". But the chunks are also sparse and should be compressed. However, further dimension dependency in accessing array elements arises unless these compressed chunks are arranged judiciously in the page buffer. Differences among the dimension cardinalities can also cause dimension dependency; a slice operation along a dimension of large cardinality tends to consume much time. We alleviate these two kinds of dimension dependency by introducing the notion of an "extended chunk". Extended chunks adapt flexibly to the general situation where data densities in chunks are low and not uniformly distributed. Employing extended chunks, we propose secondary storage schemes for a multidimensional array using a space-filling curve such as the Z-curve. The evaluation results show that the proposed storage schemes exhibit good performance while alleviating dimension dependency.
Title: A storage scheme for multidimensional data alleviating dimension dependency
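The Z-curve layout mentioned above works by bit-interleaving chunk coordinates, so that chunks adjacent along any dimension land near each other on disk regardless of which dimension a scan follows. A minimal sketch (chunk compression and the paper's extended-chunk mechanism are omitted; function names are ours):

```python
from itertools import product

def z_interleave(coords, bits=8):
    """Morton (Z-curve) key: interleave the bits of the chunk
    coordinates; the first coordinate takes the lowest bit slot."""
    key = 0
    ndim = len(coords)
    for bit in range(bits):
        for d, c in enumerate(coords):
            key |= ((c >> bit) & 1) << (bit * ndim + d)
    return key

def chunk_order(array_shape, chunk_shape):
    """Chunk coordinates of a multidimensional array, in the Z-curve
    order in which they would be laid out on secondary storage."""
    nchunks = [(s + c - 1) // c for s, c in zip(array_shape, chunk_shape)]
    coords = product(*[range(n) for n in nchunks])
    return sorted(coords, key=z_interleave)
```

Compared with row-major chunk order, no single dimension is systematically favored, which is precisely what mitigates dimension dependency for sequential and slice access.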
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746791
Francisco M. Couto, Tiago Grego, Hugo P. Bastos, Catia Pesquita, Rafael P. Torres Jiménez, Pablo Sánchez, Leandro Pascual, C. Blaschke
An important research topic in bioinformatics involves the exploration of vast amounts of biological and biomedical scientific literature (BioLiterature). Over the last few decades, text-mining systems have exploited this BioLiterature to reduce the time researchers spend analyzing it. However, state-of-the-art approaches are still far from reaching performance levels acceptable to curators, and fall below the performance obtained in other domains, such as personal-name recognition or news text. To achieve high levels of performance, it is essential that text-mining tools effectively recognize the bioentities present in BioLiterature. This paper presents FIBRE (Filtering Bioentity Recognition Errors), a system for automatically filtering misannotations generated by rule-based systems that recognize bioentities in BioLiterature. FIBRE uses different sets of automatically generated annotations to identify the main features that characterize an annotation as being of a certain type. These features are then used to filter misannotations with a confidence threshold. FIBRE was assessed on a set of more than 17,000 documents previously annotated by Text Detective, a state-of-the-art rule-based bioentity name recognition system. Curators evaluated the gene annotations given by Text Detective that FIBRE classified as non-gene annotations, and we found that FIBRE filtered more than 600 misannotations with a precision above 92%, requiring minimal human effort, which demonstrates its effectiveness in a realistic scenario.
Title: Identifying bioentity recognition errors of rule-based text-mining systems
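The filtering step described above reduces to a thresholded post-pass over a recognizer's output. A minimal sketch; the callable scorer stands in for FIBRE's feature-based model, whose actual features and learning method are not specified in the abstract:

```python
def filter_annotations(annotations, scorer, threshold=0.9):
    """FIBRE-style post-filter: keep an annotation only when the
    scorer's confidence that it really is of the claimed type reaches
    the threshold; flag the rest as likely misannotations.
    `scorer` maps an annotation to a confidence in [0, 1]; both it and
    the default threshold are illustrative, not the paper's values."""
    kept, flagged = [], []
    for ann in annotations:
        (kept if scorer(ann) >= threshold else flagged).append(ann)
    return kept, flagged
```

In practice the scorer would be trained on the sets of automatically generated annotations, e.g. a classifier over contextual features of each mention, and only the flagged items need curator review.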
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746802
M. Mahmoud, M. Rafea, A. Rafea
Although many expert systems have been developed around the world, little consideration has been given to the impacts resulting from their use. There is a difference between expert systems developed in laboratories for research and demonstration and expert systems that can be applied in the field. Applied expert systems must cover the end users' requirements and meet further evaluation criteria. ESs are evaluated both in the laboratory and in the field. In the laboratory evaluation, an evaluation methodology guarantees that the ES can be used in the field. The field evaluation is carried out through field experiments, which showed that fields managed by the ES outperform the control fields. In this paper, the evaluation criteria that guarantee the success of ESs deployed in the field are presented. These criteria have been applied to three ES applications: CITEX for citrus cultivation, CUPTEX for cucumber cultivation under plastic tunnels, and NEPER for wheat cultivation. Expert systems potentially have several different types of impact relative to the applied domain. The field experiments are used to evaluate the economic and environmental impacts of the ES. The economic impact includes cost, profit, and yield. The environmental impact includes the effect of using the ES on water and soil conservation, as well as on decreasing the amount of pesticides used in the fields.
Title: Using expert systems technology to increase agriculture production and water conservation
Pub Date: 2008-11-01 | DOI: 10.1109/ICDIM.2008.4746787
Katsuhiro Suzuki, J. Sakata, J. Hosoya
In order to analyze the progress of research based on the integration of leading-edge technologies, this paper introduces three types of patent data categorization, namely the Mix, Only, and Mono-IPC types, expanding the concept of IPC co-occurrence. Additionally, the concept of an innovation coordinate is introduced as a means of investigating the status and trends of R&D in the field of fuel cells for the years 2000 to 2004. It is shown that the yearly positions determined by the sets of Japanese patent applications concerning fuel cells kept changing during the period; the share of Mix-type inventions decreased while those of the Only and Mono-IPC types increased. The declining trend of the Mix type, which is considered common in the convergence process of technology fusion toward the launch of new products based on cutting-edge technologies, is also observed in the field of micro-electro-mechanical systems (MEMS). Our future work includes a trilateral comparison adding European and US patent datasets, as well as hybrid analyses introducing external data such as R&D expenditures to investigate the validity and/or the application limits of the present analysis in the quantitative study of the dynamics of innovation.
Title: An empirical analysis on progress of technology fusion
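The abstract does not define the three categorization types, but one plausible reading based on IPC co-occurrence can be sketched as follows. The rules below are entirely our assumption for illustration: a patent carrying a single IPC code is Mono-IPC; one whose several codes all fall inside the target field is Only; one whose codes span the target field and other fields (technology fusion) is Mix.

```python
def categorize_patent(ipc_codes, field_prefixes):
    """Assign a patent to the Mix / Only / Mono-IPC type under our
    assumed reading of the paper's categorization.
    ipc_codes: IPC symbols on the patent, e.g. ["H01M8/10", "B60L11/18"].
    field_prefixes: IPC prefixes delimiting the target field; H01M
    (fuel-cell related) is used here purely as an illustration."""
    if len(ipc_codes) == 1:
        return "Mono-IPC"
    in_field = [c for c in ipc_codes
                if any(c.startswith(p) for p in field_prefixes)]
    if len(in_field) == len(ipc_codes):
        return "Only"   # several codes, all inside the field
    return "Mix"        # codes span the field and other technologies

def type_shares(patents, field_prefixes):
    """Yearly occupation ratio of each type over a set of patents."""
    counts = {"Mix": 0, "Only": 0, "Mono-IPC": 0}
    for codes in patents:
        counts[categorize_patent(codes, field_prefixes)] += 1
    total = sum(counts.values()) or 1
    return {t: n / total for t, n in counts.items()}
```

Tracking these shares year by year would reproduce the kind of trend statement made above, e.g. a shrinking Mix share as a technology converges toward products.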