As activities in online virtual worlds become more and more complex, the online virtual world appears to have developed into a virtual economy. This article develops a Gross Replacement Model to analyze the effect of online virtual activities on the macro-economy. We propose that virtual activities, which include in-world and inter-world activities, do generate economic value and affect the macro-economy in three ways: when game players allocate more time to the virtual world and replace real-world activities with virtual ones, they generate a replacement effect on real Gross Domestic Product and, at the same time, a transfer effect on Gross Virtual Product in the virtual world, while also increasing the complementary effect on the real income generated by the network industry. Survey results show that the replacement effect exists. We develop a method for measuring the three effects of virtual activity on the real economy. Data from Second Life demonstrate that the virtual game has a positive effect on real GDP. Finally, we suggest that virtual economic activities be included in a Mega-GDP to reflect the effect of the network on the whole economy.
{"title":"From Time Replacement to Gross Replacement: The Effect of Online Virtual Game on Real Economy","authors":"Hui Peng, Hong Wu","doi":"10.1109/ICIME.2009.81","DOIUrl":"https://doi.org/10.1109/ICIME.2009.81","url":null,"abstract":"With the activities in the online virtual world becoming more and more complex, online virtual world seems to have developing into a virtual economy. This article develops a Gross Replacement Model to analyze the effect of online virtual activities on the macro-economy. We propose that virtual activities, which include in-world activities and inter-world activity, do generate economic value and affect the macro-economy through three ways: when game-players allocate more time in virtual world and replace their real-world activities with virtual activities, it could generate the replacement effect on real Gross Domestic Product and transfer effect on Gross Virtual Product in the virtual world at the same time, but also increase complementary effect on the real income generated by the network industry. Results from survey proved that there exists the replacement effect. We developed a method of measuring the three effects of the virtual activity on real economy. Data from Second Life demonstrates that the virtual game has positive effect on the real GDP. Finally, we propose the suggestion that virtual economic activities should be included into the Mega-GDP to reflect the effect of network on the whole economy.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127797922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohamad Faizal Ab Jabal, M. Rahim, Nur Zuraifah Syazrah Othman, Zahabidin Jupri
In recent years, researchers have put great effort into producing efficient methods of drawing extraction. This paper focuses on CAD data extraction from CAD drawings and reviews the methods proposed by previous researchers. CAD data extraction has been a popular research topic since the early 1980s. Nowadays, most applications in the engineering field are computerized, including the CAD systems that engineers use to design their products. As computerized applications have become important tools in engineering, the production field has also been affected, raising the issue of integrating CAD with manufacturing systems. For that reason, many researchers have tried to create systems that can extract meaningful information from CAD drawings and establish a connection between CAD and manufacturing systems. In manufacturing, for example, the machine-side systems are known as CAM systems; however, there is no direct connection from a CAD system to a CAM system, so many approaches have been proposed to solve this issue. The focus of this paper is to study these approaches and compare them. The finding of this paper is a suitable approach that can be used in the next stage of this research.
{"title":"A Comparative Study on Extraction and Recognition Method of CAD Data from CAD Drawings","authors":"Mohamad Faizal Ab Jabal, M. Rahim, Nur Zuraifah Syazrah Othman, Zahabidin Jupri","doi":"10.1109/ICIME.2009.56","DOIUrl":"https://doi.org/10.1109/ICIME.2009.56","url":null,"abstract":"In recent years, various researchers have put in great effort to produce an efficient method of drawing extraction. This paper will focus on CAD data extraction from CAD drawing and study the method that has been proposed by previous researchers. CAD data extraction became a popular research since the early 80’s. Nowadays, most applications in engineering field are already computerized. This includes the CAD application system, the systems used by engineers to design their products. As the use of computerized application became important tool in engineering field, the production field is also affected. This raises the issue of integrating CAD with manufacture systems. For that reason, most researchers try to create a system that can extract meaningful information from the CAD drawing and create a connection between CAD and manufacture system. For example in manufacturing field, manufacture system is a machine system where it is also known as CAM systems. However, there is no direct connection from CAD system to CAM system. Therefore, many approaches have been proposed by the previous researchers to solve the issues. Focus on this paper is to study the approaches and make comparison among it. Finding from this paper is suitable approach can be used for next stage in this research.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120980883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patterns and classification of stock or inventory data are very important for business support and decision making. Timely identification of newly emerging trends is also needed in business processes. Sales patterns from inventory data indicate market trends and can be used in forecasting, which has great potential for decision making, strategic planning, and market competition. The objectives of this research are to enable better decision making for improving sales, services, and quality, and to identify the reasons for dead-stock, slow-moving, and fast-moving products, a useful mechanism for business support, investment, and surveillance. In this paper we propose an algorithm for mining patterns in large stock data to predict factors affecting the sale of products. In the first phase, we divide the stock data into three clusters on the basis of product categories and sold quantities, i.e., Dead-Stock (DS), Slow-Moving (SM), and Fast-Moving (FM), using the K-means algorithm. In the second phase, we propose a Most Frequent Pattern (MFP) algorithm to find the frequencies of property values of the corresponding items. MFP provides frequent patterns of item attributes in each product category and also gives the sales trend in a compact form. Experimental results show that the proposed hybrid K-means plus MFP algorithm can generate more useful patterns from large stock data.
{"title":"Frequent Patterns Minning of Stock Data Using Hybrid Clustering Association Algorithm","authors":"Aurangzeb Khan, Khairullah Khan, B. Baharudin","doi":"10.1109/ICIME.2009.129","DOIUrl":"https://doi.org/10.1109/ICIME.2009.129","url":null,"abstract":"Patterns and classification of stock or inventory data is very important for business support and decision making. Timely identification of newly emerging trends is also needed in business process. Sales patterns from inventory data indicate market trends and can be used in forecasting which has great potential for decision making, strategic planning and market competition. The objectives in this research are to get better decision making for improving sale, services and quality as to identify the reasons of dead stock, slow-moving, and fast-moving products which is useful mechanism for business support, investment and surveillance. In this paper we proposed an algorithm for mining patterns of huge stock data to predict factors affecting the sale of products. In the first phase, we divide the stock data in three different clusters on the basis of product categories and sold quantities i.e. Dead-Stock (DS), Slow-Moving (SM) and Fast-Moving (FM) using K-means algorithm. In the second phase we have proposed Most Frequent Pattern (MFP) algorithm to find frequencies of property values of the corresponding items. MFP provides frequent patterns of item attributes in each category of products and also gives sales trend in a compact form. The experimental result shows that the proposed hybrid k-mean plus MFP algorithm can generate more useful pattern from large stock data.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116999800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research presents the simulation and analysis performed through an experimental setup to identify the reflectance fiber probe configuration that measures the highest intensity of light reflected from a sample. The reflectance fiber probe consists of two strands of fiber cable: one emits light and the other retrieves the reflected (or backscattered) light from the sample. The results of this research will assist in the development of spectroscopy kits for biological applications. High backscattered-light intensity is desired so that the optical sensor can perform its measurement with high efficiency and accuracy. The fiber probe used in the design has a core diameter of 1 mm. The simulation of the optical design was conducted using ASAP software. The highest intensity of backscattered light is measured when the distance between the probe end and a 100% reflective sample is 2 mm and the separation between the emitting and retrieving fiber cables is 0 mm. Subsequent simulations show that the greater the separation between the two fiber cables, the less backscattered light is captured. In the experimental setup, an optical fiber sensor is used to measure the backscattered light. A plastic optical fiber with a core diameter of 1 mm serves as the fiber probe. The distance between the two fiber cores is fixed at 1 mm, and the probe-to-sample distance is adjusted in 1 mm steps to identify the distance that produces the highest backscattered-light intensity. Two samples were used in the experiment: a mirror and white Spectralon. The highest intensity of backscattered light was found at distances of 3 mm and 4 mm for the mirror and Spectralon, respectively.
{"title":"Identification of Reflectance Fiber Probe Configurations Efficiency through ASAP Simulation and Optical Fiber Sensor","authors":"A. Omar, M. MatJafri","doi":"10.1109/ICIME.2009.30","DOIUrl":"https://doi.org/10.1109/ICIME.2009.30","url":null,"abstract":"This research introduced the simulation and the analysis performed through experimental setup to identify the best reflectance fiber probe configuration that able to measure the highest intensity of reflectance light from a sample. The reflectance fiber probe consists of two strands of fiber cable, one is for light emitting and another is to retrieve the reflected (or backscattered light) from the sample. The result in this research will assist in the entire development of spectroscopy kits for biological applications. The high intensity of backscattered light is desired in the measurement since it should meet the capability of the optical sensor to perform its measurement with high efficiency and accuracy. The fiber probe used in the design has the core with diameter of 1mm. The simulation of the optical design was conducted using ASAP software. It is identified that the highest intensity of backscattered light can be measured when the distance between probe’s end and the 100% reflective sample is put at 2 mm and the distance between the emitting and retrieving fiber cable is set to be at 0 mm. The consecutive simulation shows that the further the distance between the two fiber cables will lead to decreasing capacity of backscattered light. In the experimental setup, an optical fiber sensor is used to perform the measurement of backscattered light. Plastic optical fiber with core diameter of 1 mm is used as fiber probe. The distance between the two fiber cores is fixed to be at 1 mm and the distance between probe and sample is adjusted for every 1 mm to identify the distance that can produce the best intensity of backscattered light. Two samples have been used in the experiment which is mirror and white Spectralon. The highest intensity of backscattered light was identified at distance of 3 mm and 4 mm for mirror and Spectralon respectively.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123772247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating computer-based geostatistical methods can address the uneven variability of soil phosphorus in agricultural fields. These methods can be used to simulate the spatial variability of agricultural phosphorus in such areas, which is valuable for balanced phosphorus consumption by crops and for reducing environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were collected on a 20 x 20 m grid from plots planted with sugarbeet. Plant samples were also collected from the same plots, and the soil and plant samples were prepared for analysis. The phosphorus-level data were analyzed with Kriging interpolation, a computer-based geostatistical method. For cross-validation, distribution percentages were computed using all Kriging methods. The cross-validation identified Simple Kriging as the optimal interpolation method for each data group (Ordinary RMS ±6.38, Simple RMS ±5.98, Universal RMS ±6.41). Using this method, semivariogram models were tested, and the exponential semivariogram model was found to be the most suitable for the experimental data. Soil and plant phosphorus distribution surfaces were adequately determined using the selected Simple Kriging interpolation and the suitable semivariogram model. These distribution surfaces were processed with the 3D Analyst software module to enable three-dimensional mapping.
{"title":"Computer Based Geostatistical Strategies in Assessing of Spatial Variability of Agricultural Phosphorus on a Sugarbeet Field","authors":"M. Karaman, T. Susam, Servet Yaprak, F. Er","doi":"10.1109/ICIME.2009.70","DOIUrl":"https://doi.org/10.1109/ICIME.2009.70","url":null,"abstract":"Evaluating the computer based geostatiscial methods will eliminate the unequal soil phosphorus variability on agricultural fields. These methods may commonly be useable for simulation of spatial variability of agricultural phosphorus on these areas. It will be valuable for balanced phosphorus consumption by crops and reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples based on 20 X 20 m grids were collected from the plots under the sugarbeet plants. Plant samples were also collected from the same plots. The soil and plant samples were prepared for analysis. The data concerning with phosphorus levels were analyzed through Kriging interpolations, which are the computer based geostatistical methods. To achieve cross-validation, distribution percentages were formed by using all Kriging methods. As a result of cross validations, the best optimal method was found to be Simple Kriging interpolation method for each data group (Ordinary RMS, plus or minus 6.38, Simple RMS, plus or minus 5.98 Universal RMS, plus or minus 6.41). By using this method, semivariogram models were tested, and exponential semivariogram model was found as the most suitable model for the experimental data group. Soil and plant phosphorus distribution faces were adequately determined by using selected simple Kriging interpolation method and suitable semivariogram model. These distribution faces were processed by software 3D analyst modul to enable three dimensional mapping.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125745107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Processing the huge amount of collected network data to identify network intrusions incurs a high computational cost. Reducing the number of features in the collected data may therefore solve the problem. We propose an approach for obtaining an optimal number of features with which to build an efficient model for an intrusion detection system (IDS). Two feature selection algorithms are used to generate two feature sets, which are then merged into a combined feature set and intersected into a shared feature set. The shared feature set consists of features agreed upon by both feature selection algorithms, and these are therefore considered important features for identifying intrusions. Human intervention is then applied to find an optimal number of features between the combined (maximum) and shared (minimum) feature sets. Empirical results show that the proposed feature set gives results equivalent to those of the feature sets generated by the selected feature selection methods and the combined feature set.
{"title":"A Feature Selection Approach for Network Intrusion Detection","authors":"Kok-Chin Khor, Choo-Yee Ting, Somnuk-Phon Amnuaisuk","doi":"10.1109/ICIME.2009.68","DOIUrl":"https://doi.org/10.1109/ICIME.2009.68","url":null,"abstract":"Processing huge amount of collected network data to identify network intrusions needs high computational cost. Reducing features in the collected data may therefore solve the problem. We proposed an approach for obtaining optimal number of features to build an efficient model for intrusion detection system (IDS). Two feature selection algorithms were involved to generate two feature sets. These two features sets were then utilized to produce a combined and a shared feature set, respectively. The shared feature set consisted of features agreed by the two feature selection algorithms and therefore considered important features for identifying intrusions. Human intervention was then conducted to find an optimal number of features in between the combined (maximum) and shared feature sets (minimum). Empirical results showed that the proposed feature set gave equivalent results compared to the feature sets generated by the selected feature selection methods, and combined feature sets.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126905647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The integration of portable, energy-efficient computing devices with clothing opens up the possibilities of wear-ware. A wearable network is composed of these tiny interactive devices and is highly appealing nowadays. A wearable body area network (WBAN) is an emerging technology developed for wearable monitoring applications.
{"title":"Wearable Wireless Body Area Networks","authors":"Farhana Tufail, M. Islam","doi":"10.1109/ICIME.2009.142","DOIUrl":"https://doi.org/10.1109/ICIME.2009.142","url":null,"abstract":"Integration of portable, energy efficient computing devices with clothing results in possibilities of wear-ware. Wearable network is composed of all these tiny interactive devices, which is highly appealing now a day's. A Wearable body area network (WBAN) is the emerging technology that is developed for wearable monitoring application.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126917385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The idea of increasing data accessibility in Mobile Ad Hoc Networks (MANETs) by replicating data remains a challenge due to the inherently unreliable and unstable nature of MANETs. Awareness of changes in relative host mobility provides useful instability information about mobile hosts and helps in managing replicas. This paper introduces a new concept, Node Interactivity Self Assessment (NISA), or, more conceptually, sociability, which represents the percentage of relative movement for each mobile host and can be obtained by considering the diversity of incoming requests from nodes in the vicinity. Using this concept, we propose a novel replication technique. Simulation results show that this technique achieves balanced energy consumption as well as high data accessibility among the nodes in the network. The idea extends the simple phrase: the more interactive you are, the more knowledge you have.
{"title":"Node Interactivity Self Assessment (NISA) and Data Replication in MANETs","authors":"S. S. Sadeghi, S. Jabbehdari","doi":"10.1109/ICIME.2009.134","DOIUrl":"https://doi.org/10.1109/ICIME.2009.134","url":null,"abstract":"The idea of increasing data accessibility in Mobile Ad Hoc Networks (MANETs) by replicating data still remains a challenge due to the inherent unreliable and unstable nature of MANETs. Being aware of relative host mobility changes provides useful instability information of mobile hosts and helps in managing replicas. This paper introduces a new concept as Node Interactivity Self Assessment (NISA), or more conceptual sociability, which presents the percentage of relative movement for each mobile host and can be obtained by considering the diversity of incoming requests from the nodes in the vicinity. Using this concept, we propose a novel replication technique. Simulation results show that by this technique, we achieve a balanced energy consumption as well as high data accessibility among the nodes in the network.This idea comes and being extended from the simple phrase that the more interactive you are, the more knowledge you have!","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134472490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The expanded use of new information technologies has significantly affected both the initiation and the maintenance of professional education. This trend is especially valuable for relatively isolated countries such as Taiwan, where architectural education could be vastly improved by encouraging regular interaction with faculties and practicing professionals outside the country. New, far-reaching information and communication technologies (ICTs) can facilitate such exchanges. This article 1) explores how ICTs promise to transform professional education in architecture; 2) examines the issues and difficulties of implementing ICT in the teaching of professional architecture; and 3) discusses how instructional technology and school reform can, under the right conditions, become mutually reinforcing partners in supporting student learning, specifically in rapidly developing nations like Taiwan. Particular attention is given to the increased potential for collaborative work that crosses international and cultural boundaries, molding studies and exercises to the interests of students and teachers rather than to knowledge that has recently evolved, and to how this maximized use will benefit architectural education.
{"title":"Rethinking Teaching: How ICTs Can Positively Impact Education in Architecture","authors":"Tsungjuang Wang","doi":"10.1109/ICIME.2009.36","DOIUrl":"https://doi.org/10.1109/ICIME.2009.36","url":null,"abstract":"The expanded use of new information technologies has significantly affected both the initiation and the maintenance of professional education. This trend is especially valuable for relatively isolated countries such as Taiwan, where architectural education could be vastly improved by encouraging regular interaction with faculties and practicing professionals outside the country. New, far-reaching information and communication technologies (ICTs) can facilitate such exchanges. This article 1) explores how ICTs hold a promise of transforming the process of professional education in architecture; 2) examines the issues and difficulties of implementing ICT in the teaching of professional architecture, and 3) discusses how instructional technology and school reform can, under the right conditions, become mutually reinforcing partners in supporting student learning, specifically in rapidly developing nations like Taiwan.. Particular attention is given to the increased potential for collaborative work that crosses international and cultural boundaries, molding studies and exercises to the interests of students and teachers rather knowledge that has recently evolved, and how this maximized use will benefit architectural education.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129472849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building on the fact that ontologies can help in making sense of huge amounts of content, this paper proposes a case study for building an ontology from a set of rules generated by a rule-based learning system. The proposed algorithm utilizes the extracted, representative rules generated from the original dataset to develop ontology elements. The algorithm is applied to a well-known dataset in the breast cancer domain. The results are encouraging and support the potential role this approach can play in providing a suitable starting point for ontology development.
{"title":"New Algorithm for Building Ontology from Existing Rules: A Case Study","authors":"Faten F. Kharbat, Haya Ghalayini","doi":"10.1109/ICIME.2009.16","DOIUrl":"https://doi.org/10.1109/ICIME.2009.16","url":null,"abstract":"From the fact that ontologies can help in making sense of huge amount of content, this paper proposes a case study for building ontology via set of rules generated by rule-based learning system. The proposed algorithm utilises the extracted and representative rules generated from the original dataset in developing ontology elements. The proposed algorithm is applied to a well known dataset in the breast cancer domain. The results are encouraging and support the potential role that this approach can play in providing a suitable starting point for ontology development.","PeriodicalId":445284,"journal":{"name":"2009 International Conference on Information Management and Engineering","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132412875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}