Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140724
Namiah Abu Osman, S. Ho, S. Haw
In software engineering, empirical studies are one method of process improvement. We compare several empirical studies on ontology engineering in order to discuss potential empirical work in our domain of interest, namely learning process improvement. We then conduct an assessment of novice modelers to determine the effect of ontology on the subjects' learning process. The p-values indicate a significant difference between the group provided with an ontology (the ontology group) and the group without one (the control group). Furthermore, the mean scores for comprehensiveness and confidence in the ontology group are slightly higher than those in the control group. We conclude that the use of an ontology may contribute positively to the learning process, especially when the learners are involved in the ontology engineering process.
{"title":"Empirical approach in ontology engineering: An assessment on novice modelers","authors":"Namiah Abu Osman, S. Ho, S. Haw","doi":"10.1109/MYSEC.2011.6140724","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140724","url":null,"abstract":"In software engineering, one of the process improvement methods is through empirical studies. We compare several empirical studies on ontology engineering to discuss potential empirical works on our domain of interest, namely the learning process improvement. Subsequently, we conduct an assessment on novice modelers to determine the effects of ontology on subjects' learning process. The p-values indicate that there is a significant difference between the group that is provided with ontology (ontology group) and the group that is not (control group). Furthermore, the mean scores for comprehensiveness and confidence in the ontology group are slightly higher than in the control group. We conclude that the use of ontology may positively contribute to the learning process especially with the learners involved in the ontology engineering process.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127501276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140660
A. Mehmood, D. Jawawi
Model-driven code generation is increasingly applied to the development of software systems, owing to its recognition as a means of improving the software produced. At the same time, aspect-oriented programming languages have entered the software development mainstream thanks to their distinctive support for better modularization and separation of concerns. As a consequence of this prevalence, and in recognition of its impact on several software quality factors, various approaches for generating aspect-oriented code from models have been proposed in the literature. This paper provides a comparative review of some existing approaches and discusses important issues and directions in this area. The results of the survey indicate that aspect-oriented model-driven code generation is a rather immature area. The majority of approaches address structure diagrams only, which limits them to partial code generation. Research that incorporates behavior diagrams is needed to achieve the long-term goal of full code generation from aspect-oriented models.
{"title":"A comparative survey of aspect-oriented code generation approaches","authors":"A. Mehmood, D. Jawawi","doi":"10.1109/MYSEC.2011.6140660","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140660","url":null,"abstract":"Model-driven code generation is being increasingly applied to developing software systems as a result of its recognition as an instrument to enhance the produced software. At the same time, aspect-oriented programming languages have come to the mainstream of software development due to their distinctive features to provide better modularization and separation of concerns. As a consequence of this prevalence and recognition of its impact on improving several software quality factors, different approaches have been proposed in literature to generate aspect-oriented model-driven code. This paper provides a comparative review of some existing approaches and discusses important issues and directions in this particular area. The results of this survey indicate aspect-oriented model-driven code generation being a rather immature area. Majority of approaches address structure diagrams only, a fact that limits them to partial code generation. There is a need for research that incorporates behavior diagrams, in order to achieve long term goal of full code generation from aspect-oriented models.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131715587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140650
N. A. Bakar, A. Selamat
During the requirement specification phase of software development, many existing system or business process requirements are captured in natural language or with specialized tools such as UML. However, turning these informal requirements into formalized properties has received little attention from software developers due to time and budget constraints. Informally captured requirements must be formally specified before system verification can be performed. Formal verification checks whether a system model meets its formal specification, while validation checks whether the developed system fulfills its intended purpose. In this paper, we therefore present our study of the formal verification of a multi-agent system using a model checking approach. We use a model checking tool to execute the formal verification procedure and verify certain kinds of properties of the requirement specification. We show an example of how the tool supports verification of the Universiti Teknologi Malaysia (UTM) multi-agent online application system and conclude that the proposed model checking approach will benefit multi-agent systems.
{"title":"Analyzing model checking approach for multi agent system verification","authors":"N. A. Bakar, A. Selamat","doi":"10.1109/MYSEC.2011.6140650","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140650","url":null,"abstract":"During requirement specification process of software development activities, many existing systems or business process requirements have been captured using natural language or specialized tools such as UML. However, the capturing of informal requirements into formalized properties has not been taken into attention by software developers due to time and budget constraints. It is critical for the informally captured requirements to be formally specified in order to perform system verification. Formal verification checks whether a system model meets the formal specifications while validation checks whether the developed system fulfills its intended purpose. Therefore, in this paper, we present our studies of formal verification of multi agent system using model checking approach. We have utilized model checking tool in order to execute the formal verification procedures based on a particular basic theory to verify certain kind of properties of requirement specifications. We show an example of how model checking tool could support the verification of Universiti Teknologi Malaysia (UTM) multi agent online application system and conclude that the propose model checking approach will benefit multi agent system.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133507407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
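The entry above describes using a model checking tool to verify properties of a requirement specification. As an illustrative sketch only (not the authors' tool or basic theory), the core of explicit-state safety checking is a reachability search that reports a counterexample state if one exists; the workflow states and property below are hypothetical:

```python
from collections import deque

def check_safety(initial, transitions, safe):
    """Explicit-state model checking of a safety property: explore all
    reachable states via BFS and return the first unsafe state found
    (a counterexample), or None if every reachable state is safe."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state  # counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # property holds in every reachable state

# Hypothetical application-form workflow that must never reach the
# state "approved_without_review".
def transitions(state):
    return {
        "submitted": ["under_review"],
        "under_review": ["approved", "rejected"],
        "approved": [],
        "rejected": [],
    }.get(state, [])

print(check_safety("submitted", transitions,
                   lambda s: s != "approved_without_review"))
```

Real model checkers additionally handle temporal-logic properties and state-space reduction, which this sketch omits.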
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140671
M. Mohammadi, M. Mukhtar
Supply Chain Management (SCM) generally cannot respond quickly enough to changes in a fast-evolving business environment, so business processes must be created quickly in response to rapid changes in the marketplace. To accomplish this, business processes and Service-Oriented Architecture (SOA) must work together: SOA offers a suitable platform for creating business models at the pace of business needs. This paper examines the features of services and business processes based on SOA and describes the relationships between them.
{"title":"SOA-based business process for Supply Chain Management","authors":"M. Mohammadi, M. Mukhtar","doi":"10.1109/MYSEC.2011.6140671","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140671","url":null,"abstract":"Generally, the Supply Chain Management (SCM) is not good enough to respond quickly to the changes in the fast evolving business environment. Hence it is necessary to quickly create business process to respond to rapid changes in the marketplace. In order to accomplish this, the Business Process and Service-oriented Architecture (SOA) must join hands together. The SOA offers a perfect platform to rapidly create a business model with the pace of business needs. This paper aims to examine the features of service and business process based on SOA and to encompass their relationships.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"238 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133685302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140654
Furkh Zeshan, R. Mohamad
The ability to predict the reliability of software during architectural design not only saves cost but also helps improve reliability. With the growing size and complexity of software applications, researchers have focused on how to assess software reliability through its architecture. The architectural design phase is the stage at which one can evaluate whether the software being developed will fulfill its requirements, which is why a dependable method is required to analyze and predict the reliability of a software architecture. Reliability prediction at the architecture level is challenging because architectural reliability depends on the reliability of the individual components, their size, complexity, and implementation technology, and the interactions among the components. In this paper, we compare existing reliability prediction models against our criteria to determine which performs best and what shortcomings these models have. We also suggest the research activities needed to overcome these shortcomings.
{"title":"Software architecture reliability prediction models: An overview","authors":"Furkh Zeshan, R. Mohamad","doi":"10.1109/MYSEC.2011.6140654","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140654","url":null,"abstract":"The ability to predict reliability of the software during its architectural design not only helps in saving cost but also helps to improve its reliability. With the growing size and complexity of software applications researchers have focused that how to get software reliability through its architecture. Architectural design phase is the stage where one can evaluate either the developed software will fulfill the requirements or not; that's why a highly reliable method is required to analyze and predict the software's architectural reliability. Reliability prediction at architecture level is a challenging task because the architecture reliability depends on the reliability of the individual component, their size, complexity, implemented technology and the interaction among the components. In this paper we have compared the existing reliability prediction models based on our criteria. The purpose is to discover that which one is the best and what is the shortcoming of these models. We also have suggested the research activities needed to overcome these shortcomings.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"510 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116033405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
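The entry above notes that architectural reliability depends on component reliabilities and their interactions. As a minimal sketch of the simplest class of such models, assuming independent component failures (the surveyed models are considerably more sophisticated):

```python
def serial_reliability(component_reliabilities):
    """Simplest architecture-based estimate: components execute in
    sequence and fail independently, so system reliability is the
    product of the component reliabilities."""
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

def architecture_reliability(reliabilities, visits):
    """Visit-weighted variant: a component expected to execute v times
    per run contributes R_i ** v, so heavily used components dominate
    the system estimate."""
    r = 1.0
    for ri, v in zip(reliabilities, visits):
        r *= ri ** v
    return r

# Three hypothetical components at 99%, 95%, and 99.9% reliability:
print(serial_reliability([0.99, 0.95, 0.999]))
# The same components visited 2, 1, and 5 times per run:
print(architecture_reliability([0.99, 0.95, 0.999], [2, 1, 5]))
```

The visit counts would normally come from an architectural usage model (e.g. a Markov model of control transfer), not be fixed by hand as here.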
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140694
M. Ashraf, Masitah Ghazali
Embedded systems are proliferating across vast application areas with ever-increasing, multifarious functionality. Because research and development focus on growing software issues, the naturalness of the physical interface remains neglected, resulting in interaction complexities for the user. In this work, we investigate the complexities of three embedded systems (a washing machine, a camera, and an MP3 player) according to the principles of physicality. By assigning quantitative values to each physicality principle, the analysis makes it evident that inverse action and compliant interaction are two powerful principles that, when applied properly, augment natural interaction with the device. As ubiquitous computing arrives on the market, it is important that embedded system developers incorporate natural interaction capabilities into everyday embedded devices by studying, discovering, and reducing the complexities of physical user interfaces.
{"title":"Investigating physical interaction complexities in embedded systems","authors":"M. Ashraf, Masitah Ghazali","doi":"10.1109/MYSEC.2011.6140694","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140694","url":null,"abstract":"Embedded systems are proliferating in vast application areas of life with ever increasing multifarious functionalities. Due to the focus of research and development on growing software issues, the naturalness of physical interface remains neglected resulting in interaction complexities for the user. In this work we investigate the complexities of three embedded systems including; washing machine; camera; and MP3 player according to the principles of physicality. By assigning quantitative values to each physicality principle, it is evident from the analysis that inverse action and compliant interaction are two powerful principles that if applied properly augment the natural interaction with the device. As the ubiquitous computing is knocking at the market doors, it is significant enough for the embedded system developers to incorporate the natural interaction capabilities in every day embedded devices by studying, discovering, and reducing the complexities of physical user interfaces.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134107183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140709
Sea Chong Seak, Ng Kang Siong
XML is the de facto language of business transactions and is widely used as a standard format for exchanging electronic documents and messages. Among XML's most valuable features are its natural way of structuring data and XML-based encryption, which handles complex requirements for securing XML data flow and exchange between applications. In this paper, we present an implementation of XML encryption that utilizes Public Key Infrastructure (PKI) technology and complies with the W3C working draft for XML Encryption.
{"title":"A file-based implementation of XML encryption","authors":"Sea Chong Seak, Ng Kang Siong","doi":"10.1109/MYSEC.2011.6140709","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140709","url":null,"abstract":"XML is the de-facto language of business transaction and widely used as a standard format to exchange electronic documents and messages. The most popular technology about the XML is the feature of structuring data and the XML based encryption in a natural way to handle complex requirement for securing XML data flow and exchange between applications. In this paper, we present the implementation of XML encryption utilizing Public Key Infrastructure (PKI) technology compliance with W3C's working draft for XML encryption.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127096639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
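The entry above concerns XML encryption per the W3C draft. As a stdlib-only sketch of the EncryptedData/CipherValue wrapping that the specification defines, with the actual cipher and PKI key transport deliberately omitted (a real implementation would use AES for the payload and RSA key wrap via a cryptography library):

```python
import base64
import xml.etree.ElementTree as ET

# XML Encryption namespace from the W3C specification.
ENC_NS = "http://www.w3.org/2001/04/xmlenc#"

def wrap_encrypted_data(ciphertext: bytes) -> str:
    """Wrap raw ciphertext bytes in an <EncryptedData> element following
    the W3C XML Encryption structure (namespace and CipherValue layout
    only; producing the ciphertext itself is out of scope here)."""
    enc = ET.Element(f"{{{ENC_NS}}}EncryptedData")
    cipher_data = ET.SubElement(enc, f"{{{ENC_NS}}}CipherData")
    cipher_value = ET.SubElement(cipher_data, f"{{{ENC_NS}}}CipherValue")
    cipher_value.text = base64.b64encode(ciphertext).decode("ascii")
    return ET.tostring(enc, encoding="unicode")

def unwrap_encrypted_data(xml_text: str) -> bytes:
    """Recover the raw ciphertext bytes from an <EncryptedData> document."""
    root = ET.fromstring(xml_text)
    value = root.find(f"{{{ENC_NS}}}CipherData/{{{ENC_NS}}}CipherValue").text
    return base64.b64decode(value)

doc = wrap_encrypted_data(b"secret payload")
print(doc)
```

A compliant file-based implementation would additionally carry an EncryptionMethod element and a KeyInfo block identifying the PKI-wrapped session key.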
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140705
P. Anthony, Ooi Yu Wooi
In this work, we describe a Calendar Agent that uses commonsense reasoning to generate sub-tasks for the events the user enters in the calendar. To generate the relevant sub-tasks for an event, we utilize ConceptNet, which contains 250,000 elements of commonsense knowledge. Our calendar agent recommends the best time slot for a given event based on the profile and behavior of the user. The agent was developed following the Prometheus Methodology using the Prometheus Design Tool. Among its features are the ability to resolve timing conflicts, to fill in event entries automatically from the given information, and to generate the sub-tasks that the user must complete before the event. The calendar agent also tries to learn the behavior of its users to provide better recommendations.
{"title":"Calendar agent with commonsense reasoning","authors":"P. Anthony, Ooi Yu Wooi","doi":"10.1109/MYSEC.2011.6140705","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140705","url":null,"abstract":"In this work, we describe a Calendar Agent that uses Commonsense reasoning to generate sub-tasks based on the events that are entered by the user in the calendar. In order to generate the relevant sub-tasks of an event, we utilize ConceptNet which has 250,000 elements of commonsense knowledge. Our calendar agent recommends the best time slot for a given event based on the profile and the behavior of the user. This calendar agent was developed based on Prometheus Methodology using Prometheus Design Tool. Among the features of the calendar agent is its ability to solve timing conflict, filling in the event entry automatically based on the given information and generating sub-tasks that the user has to complete prior to the event. The calendar agent also tries to learn the behavior of the users to provide better recommendation.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128807276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
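The entry above mentions resolving timing conflicts when recommending a slot. As a minimal sketch of conflict-free slot selection only (the event times are hypothetical, and the paper's agent additionally weighs learned user preferences, which is not modelled here):

```python
from datetime import datetime, timedelta

def recommend_slot(events, day_start, day_end, duration):
    """Return the earliest free (start, end) slot of the given duration,
    given existing events as (start, end) pairs, or None if the day is
    full. Events may be passed in any order."""
    cursor = day_start
    for start, end in sorted(events):
        if start - cursor >= duration:
            return cursor, cursor + duration  # gap before this event
        cursor = max(cursor, end)             # skip past the event
    if day_end - cursor >= duration:
        return cursor, cursor + duration      # gap after the last event
    return None  # no conflict-free slot available

day = datetime(2011, 12, 1)
events = [(day.replace(hour=9), day.replace(hour=10)),
          (day.replace(hour=10, minute=30), day.replace(hour=12))]
slot = recommend_slot(events, day.replace(hour=8), day.replace(hour=18),
                      timedelta(hours=1))
print(slot)
```

A preference-aware agent would score all candidate slots against the learned user profile rather than return the earliest one.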
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140704
Z. Hamid, I. Musirin, Muhammad Murtadha Othman, M. Rahim
In a deregulated power system, the only way to provide fair and non-discriminatory transmission service pricing is to apply electricity tracing methods. Conventional methods such as transaction-based allocation yield inaccurate transmission cost allocation because physical power system constraints are not taken into account, whereas power tracing techniques based on the proportional sharing principle (PSP) depend heavily on matrix inversion to obtain their results. This paper therefore presents an effective new formulation for load tracing via a new hybrid algorithm, Blended Crossover Continuous Ant Colony Optimization (BX-CACO). The method is flexible and easy to implement, since no complex mathematical derivations are needed, and it is free of both the PSP and matrix inversion. Validation on the IEEE 14-bus test system shows that BX-CACO is a capable tool for fair loss and reactive power allocation with fast computation time.
{"title":"Reactive power load tracing via blended crossover continuous ant colony optimization","authors":"Z. Hamid, I. Musirin, Muhammad Murtadha Othman, M. Rahim","doi":"10.1109/MYSEC.2011.6140704","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140704","url":null,"abstract":"In deregulated power system, the only way to provide fair and non-discriminatory transmission service pricing is by applying electricity tracing methods. The weakness of conventional methods like transaction based allocation has resulted to inaccurate transmission cost allocation as the physical power system constraints are not taken into account; whereas the proportional sharing principle (PSP) based power tracing techniques are very dependent on matrix inversion when obtaining the results. Hence, this paper presents an effective and new formulation technique for load tracing via a new hybrid algorithm; Blended Crossover Continuous Ant Colony Optimization (BX-CACO). This method is new, flexible and easy to be implemented as no complex mathematical derivations are needed and it is definitely free from PSP and matrix inversion. Validation on IEEE 14-bus test system has proven that BX-CACO reflects a great capability in being a sophisticated tool for fair losses and reactive power allocation with fast computation time.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127765078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
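The entry above introduces BX-CACO, a continuous ant colony variant. As a rough illustration of the underlying continuous-ACO idea only (the blended-crossover step that distinguishes BX-CACO is not reproduced here), the following sketch keeps an archive of good solutions and samples new candidates from Gaussians centred on archive members, minimizing a toy one-dimensional function:

```python
import random

def continuous_aco(f, bounds, ants=20, archive=10, iters=100, seed=1):
    """Minimal continuous ant colony optimization sketch: maintain an
    archive of the best solutions found, and have each ant sample a new
    candidate from a Gaussian centred on a randomly chosen archive
    member, with spread derived from the archive's dispersion."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Seed the archive with random solutions, kept sorted by cost.
    sols = sorted((f(x), x) for x in (rng.uniform(lo, hi)
                                      for _ in range(archive)))
    for _ in range(iters):
        for _ in range(ants):
            _, centre = sols[rng.randrange(len(sols))]
            sigma = sum(abs(x - centre) for _, x in sols) / (len(sols) - 1)
            cand = min(hi, max(lo, rng.gauss(centre, sigma + 1e-9)))
            sols.append((f(cand), cand))
        sols = sorted(sols)[:archive]  # keep only the best
    return sols[0]  # (best cost, best solution)

best_val, best_x = continuous_aco(lambda x: (x - 3.0) ** 2, (-10, 10))
print(best_val, best_x)
```

In the load-tracing application, the decision variables would be the tracing allocations and the cost function a power-flow-consistency measure, neither of which is modelled in this toy.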
Pub Date : 2011-12-01DOI: 10.1109/MYSEC.2011.6140691
S. Nandagopalan, Sudarshan TSB, B. Adiga, C. Dhanalakshmi
Content-Based Image Retrieval (CBIR) applies computer vision techniques to retrieve, for a given query image, the most visually similar images from an image database. The visual characteristics of a disease carry diagnostic information, and visually similar images often correspond to the same disease category. In this paper, we build an efficient Content-Based Echo Image Retrieval (CBEIR) system for the 2D Echo (2DE) and Color Doppler Flow (CDF) image modalities. From 2DE images we extract features such as the dimensions of the cardiac chambers (area, volume, ejection fraction, etc.), whereas texture properties, kurtosis, skewness, edge gradient, color histogram, and related features are extracted from CDF images. Together these form a multi-feature descriptor that is used to retrieve similar images from the database. The major contributions of our work are: a modified K-Means segmentation algorithm coupled with PL/SQL and external procedures for speed, accurate detection of the cardiac chambers using an active contour model, an efficient method for extracting the color segment from CDF images, and a flexible multi-feature model. These domain-specific low-level features are essential for building a reliable and scalable CBIR model. The feature database is a set of quantitative and qualitative features of the images. Our image database is populated with a diverse set of approximately 623 images of normal and abnormal patients acquired from a local cardiology hospital. Exhaustive experiments were conducted with various input query images and combinations of features to compute the retrieval efficiency, and the results were validated by domain experts. Recall-Precision graphs show that the proposed method outperforms those reported previously.
{"title":"Multifeature based retrieval of 2D and Color Doppler Echocardiographic images for clinical decision support","authors":"S. Nandagopalan, Sudarshan TSB, B. Adiga, C. Dhanalakshmi","doi":"10.1109/MYSEC.2011.6140691","DOIUrl":"https://doi.org/10.1109/MYSEC.2011.6140691","url":null,"abstract":"Content Based Image Retrieval (CBIR) is the application of computer vision techniques to retrieve the most visually similar images from the image database for any given query image. The visual characteristics of a disease carry diagnostic information and oftentimes visually similar images correspond to the same disease category. In this paper we aim at building an efficient Content Based Echo Image Retrieval (CBEIR) system for 2D Echo (2DE) and Color Doppler Flow (CDF) image modalities. From 2DE images, features such as dimensions of cardiac chambers (area, volume, ejection fraction, etc) are extracted; whereas texture properties, kurtosis, skewness, edge gradient, color histogram, etc., are extracted from CDF images. Hence, this forms a multi-feature descriptor which then is used to retrieve similar images from the database. Some of the major contributions of our work are: modified K-Means segmentation algorithm coupled with PL/SQL and External Procedures to achieve speed, accurate detection of cardiac chambers using active contour model, efficient method to extract color segment from CDF images, and flexible multifeature model. These domain specific low-level features are very important to build a reliable and scalable CBIR model. The feature database is a set of quantitative and qualitative features of the images. Our image database is populated with diverse set of approximately 623 images of normal and abnormal patients acquired from a local cardiology Hospital. Exhaustive experimentation has been conducted with various input query images and combinations of features to compute the retrieval efficiency which are validated by domain experts. It has been shown through Recall-Precision graphs that the proposed method outperforms compared to others reported in the past.","PeriodicalId":137714,"journal":{"name":"2011 Malaysian Conference in Software Engineering","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126279193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
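The entry above retrieves images by similarity over a multi-feature descriptor. As a minimal sketch of that retrieval step using hypothetical 4-dimensional feature vectors (the paper's descriptors combine chamber dimensions, texture, and color features into much longer vectors):

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(query_features, database, k=3):
    """Rank database images by feature-vector distance to the query and
    return the names of the k nearest; a stand-in for the paper's
    multi-feature similarity search."""
    ranked = sorted(database.items(),
                    key=lambda item: euclidean(query_features, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical feature database: image name -> descriptor vector.
db = {
    "normal_01":   [0.90, 0.10, 0.20, 0.80],
    "normal_02":   [0.85, 0.15, 0.25, 0.75],
    "abnormal_01": [0.20, 0.90, 0.70, 0.10],
}
print(retrieve([0.88, 0.12, 0.22, 0.79], db, k=2))
```

A production CBIR system would normalize each feature dimension and use an index structure rather than a full sort over the database.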