Efficient Computation of Top-K Skyline Objects in Data Set With Uncertain Preferences
Pub Date: 2021-01-01 | DOI: 10.4018/IJDWM.2021070104
Nitesh Sukhwani, Venkateswara Rao Kagita, Vikas Kumar, S. K. Panda
Skyline recommendation with uncertain preferences has drawn AI researchers' attention in recent years due to its wide range of applications. The naive approach to skyline recommendation computes the skyline probability of all objects and ranks them accordingly. In many applications, however, the interest is in determining the top-k objects rather than a full ranking. The most efficient algorithm for determining an object's skyline probability employs the concepts of a zero-contributing set and prefix-based k-level absorption. The authors show that the performance of these methods depends heavily on the arrangement of objects in the database. In this paper, the authors propose a method for determining the top-k skyline objects without computing the skyline probability of all objects. They also propose and analyze different methods of ordering the objects in the database. Finally, they empirically show the efficacy of the proposed approaches on several synthetic and real-world data sets.
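As a point of reference for the baseline the paper improves upon, the sketch below ranks objects by an approximate skyline probability. The pairwise dominance probabilities and the independence assumption are illustrative simplifications; the paper's exact method (zero-contributing sets and prefix-based k-level absorption) avoids them.

```python
import numpy as np

def naive_topk_skyline(dom_prob: np.ndarray, k: int):
    """Rank objects by an approximate skyline probability.

    dom_prob[i, j] is an assumed, externally estimated probability that
    object j dominates object i under the uncertain preferences.
    Treating dominance events as independent (a simplification the
    paper's exact machinery does not make), the probability that object
    i is in the skyline is the product over j != i of (1 - dom_prob[i, j]).
    """
    n = dom_prob.shape[0]
    sky = np.ones(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                sky[i] *= 1.0 - dom_prob[i, j]
    order = np.argsort(-sky)  # descending skyline probability
    return order[:k], sky

# Toy example: 4 objects with hypothetical pairwise dominance probabilities.
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 0.3, size=(4, 4))
np.fill_diagonal(p, 0.0)
topk, probs = naive_topk_skyline(p, k=2)
print(topk, probs.round(3))
```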
{"title":"Efficient Computation of Top-K Skyline Objects in Data Set With Uncertain Preferences","authors":"Nitesh Sukhwani, Venkateswara Rao Kagita, Vikas Kumar, S. K. Panda","doi":"10.4018/IJDWM.2021070104","DOIUrl":"https://doi.org/10.4018/IJDWM.2021070104","url":null,"abstract":"Skyline recommendation with uncertain preferences has drawn AI researchers' attention in recent years due to its wide range of applications. The naive approach of skyline recommendation computes the skyline probability of all objects and ranks them accordingly. However, in many applications, the interest is in determining top-k objects rather than their ranking. The most efficient algorithm to determine an object's skyline probability employs the concepts of zero-contributing set and prefix-based k-level absorption. The authors show that the performance of these methods highly depends on the arrangement of objects in the database. In this paper, the authors propose a method for determining top-k skyline objects without computing the skyline probability of all the objects. They also propose and analyze different methods of ordering the objects in the database. Finally, they empirically show the efficacy of the proposed approaches on several synthetic and real-world data sets.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90233739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Retrieval Using Intensity Gradients and Texture Chromatic Pattern: Satellite Images Retrieval
Pub Date: 2021-01-01 | DOI: 10.4018/IJDWM.2021010104
I. Jacob, P. Betty, P. Darney, Hoang Viet Long, T. Tuan, Y. H. Robinson, S. Vimal, E. G. Julie
Image retrieval methods retrieve images from a database using their features, namely colour, shape, and texture. These features are used to measure the similarity between the query image and the images in the database, and the images are ranked by this similarity. This article uses intra- and inter-texture chrominance together with intensity. The inter-chromatic texture feature is extracted by the local oppugnant coloured texture pattern (LOCTP), the local binary pattern (LBP) gives the intra-texture information, and the histogram of oriented gradients (HoG) captures shape information from the satellite images. Performance analysis on a land-cover remote sensing database, the NWPU-VHR-10 dataset, and a satellite optical land-cover database shows better results than previous works.
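A minimal sketch of such a retrieval pipeline using off-the-shelf descriptors: LOCTP is the paper's own descriptor and is not available in standard libraries, so the sketch combines only the LBP texture histogram and the HoG shape descriptor (both from scikit-image); the parameters and the Euclidean distance are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def describe(gray: np.ndarray) -> np.ndarray:
    """Concatenate an LBP texture histogram with a HoG shape descriptor.
    All images must share the same dimensions so descriptors align."""
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    shape = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, shape])

def rank_by_similarity(query: np.ndarray, database: list[np.ndarray]):
    """Sort database images by Euclidean distance to the query descriptor."""
    q = describe(query)
    dists = [np.linalg.norm(q - describe(img)) for img in database]
    return np.argsort(dists)
```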
{"title":"Image Retrieval Using Intensity Gradients and Texture Chromatic Pattern: Satellite Images Retrieval","authors":"I. Jacob, P. Betty, P. Darney, Hoang Viet Long, T. Tuan, Y. H. Robinson, S. Vimal, E. G. Julie","doi":"10.4018/IJDWM.2021010104","DOIUrl":"https://doi.org/10.4018/IJDWM.2021010104","url":null,"abstract":"Methods to retrieve images involve retrieving images from the database by using features of it. They are colour, shape, and texture. These features are used to find the similarity for the query image with that of images in the database. The images are sorted in the order with this similarity. The article uses intra- and inter-texture chrominance and its intensity. Here inter-chromatic texture feature is extracted by LOCTP (local oppugnant colored texture pattern). Local binary pattern (LBP) gives the intra-texture information. Histogram of oriented gradient (HoG) is used to get the shape information from the satellite images. The performance analysis is land-cover remote sensing database, NWPU-VHR-10 dataset, and satellite optical land cover database gives better results than the previous works.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83343927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filter-Wrapper Incremental Algorithms for Finding Reduct in Incomplete Decision Systems When Adding and Deleting an Attribute Set
Pub Date: 2021-01-01 | DOI: 10.4018/IJDWM.2021040103
Long Giang Nguyen, Le Hoang Son, N. Tuan, T. Ngan, Nguyen Nhu Son, N. Thang
The tolerance rough set model is an effective tool for solving the attribute reduction problem directly on incomplete decision systems without pre-processing missing values. In practical applications, incomplete decision systems are often changed and updated, especially when attributes are added or removed. To find reducts on dynamic incomplete decision systems, researchers have proposed many incremental algorithms that decrease execution time. However, these incremental algorithms mainly follow the filter approach, in which classification accuracy is calculated only after the reduct has been obtained. As a result, such filter algorithms are not optimal in terms of the number of attributes in the reduct or the classification accuracy. This paper proposes two distance-based filter-wrapper incremental algorithms: IFWA_AA for the case of adding attributes and IFWA_DA for the case of deleting attributes. Experimental results show that the proposed filter-wrapper incremental algorithm IFWA_AA significantly decreases the number of attributes in the reduct and improves classification accuracy compared to filter incremental algorithms such as UARA and IDRA.
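A generic filter-wrapper skeleton, not the paper's IFWA_AA/IFWA_DA algorithms (whose distance measure and incremental bookkeeping are not given in the abstract): a filter score ranks candidate attributes, and a wrapper keeps an attribute only if cross-validated accuracy improves. The mutual-information filter and decision-tree wrapper are placeholder choices.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def filter_wrapper_reduct(X, y, filter_score):
    """Filter step: rank candidate attributes by filter_score.
    Wrapper step: keep an attribute only if it improves accuracy."""
    remaining = list(range(X.shape[1]))
    remaining.sort(key=lambda a: filter_score(X[:, [a]], y), reverse=True)
    reduct, best = [], 0.0
    for a in remaining:
        trial = reduct + [a]
        acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                              X[:, trial], y, cv=3).mean()
        if acc > best:  # wrapper criterion
            reduct, best = trial, acc
    return reduct, best

# Example filter: mutual information of one attribute with the class.
mi = lambda Xa, y: mutual_info_classif(Xa, y, random_state=0)[0]
```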
{"title":"Filter-Wrapper Incremental Algorithms for Finding Reduct in Incomplete Decision Systems When Adding and Deleting an Attribute Set","authors":"Long Giang Nguyen, Le Hoang Son, N. Tuan, T. Ngan, Nguyen Nhu Son, N. Thang","doi":"10.4018/IJDWM.2021040103","DOIUrl":"https://doi.org/10.4018/IJDWM.2021040103","url":null,"abstract":"The tolerance rough set model is an effective tool to solve attribute reduction problem directly on incomplete decision systems without pre-processing missing values. In practical applications, incomplete decision systems are often changed and updated, especially in the case of adding or removing attributes. To solve the problem of finding reduct on dynamic incomplete decision systems, researchers have proposed many incremental algorithms to decrease execution time. However, the proposed incremental algorithms are mainly based on filter approach in which classification accuracy was calculated after the reduct has been obtained. As the results, these filter algorithms do not get the best result in term of the number of attributes in reduct and classification accuracy. This paper proposes two distance based filter-wrapper incremental algorithms: the algorithm IFWA_AA in case of adding attributes and the algorithm IFWA_DA in case of deleting attributes. Experimental results show that proposed filter-wrapper incremental algorithm IFWA_AA decreases significantly the number of attributes in reduct and improves classification accuracy compared to filter incremental algorithms such as UARA, IDRA.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75470700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach for Retrieving Faster Query Results From Data Warehouse Using Synonymous Materialized Queries
Pub Date: 2021-01-01 | DOI: 10.4018/IJDWM.2021040105
S. Chakraborty, Jyotika Doshi
The enterprise data warehouse stores an enormous amount of data collected from multiple sources for analytical processing and strategic decision making. The analytical processing is done using online analytical processing (OLAP) queries, where performance in terms of result retrieval time is an important factor. The major existing approaches for retrieving results from a data warehouse are multidimensional data cubes and materialized views, which incur high storage, processing, and maintenance costs. The present study strives for a simpler and faster query result retrieval approach with reduced storage space and minimal maintenance cost. The execution time of frequent queries is saved by storing their results for reuse the next time the query is fired. Executed OLAP queries are stored along with their results and the necessary metadata in a relational database referred to as the materialized query database (MQDB). The tables, fields, functions, relational operators, and criteria used in an input query are matched against those of a stored query; if they are the same, the two are considered synonymous queries. The stored query is then checked for incremental updates: if none are required, the existing stored results are fetched from MQDB; otherwise, only the incremental result is processed from the data marts. Evaluation of the MQDB model shows a significant reduction in query processing time compared to the major existing approaches. The developed model will be useful for organizations keeping historical records in a data warehouse.
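A minimal sketch of the MQDB idea: cache the results of executed queries under a canonical signature built from tables, fields, and criteria, and reuse them when a synonymous query arrives. The signature scheme and storage here are illustrative assumptions, and the incremental-refresh step from the data marts is omitted.

```python
import hashlib
import json
import sqlite3

class QueryResultCache:
    """Stores executed query results keyed by a canonical signature;
    a later query with the same signature (a synonymous query) reuses
    the stored results instead of hitting the warehouse again."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.store: dict[str, list] = {}

    @staticmethod
    def signature(tables, fields, criteria) -> str:
        canon = json.dumps([sorted(tables), sorted(fields), sorted(criteria)])
        return hashlib.sha256(canon.encode()).hexdigest()

    def run(self, sql: str, tables, fields, criteria):
        key = self.signature(tables, fields, criteria)
        if key in self.store:  # synonymous query: reuse stored results
            return self.store[key]
        rows = self.conn.execute(sql).fetchall()
        self.store[key] = rows
        return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INT)")
conn.execute("INSERT INTO t VALUES (1)")
cache = QueryResultCache(conn)
print(cache.run("SELECT a FROM t", ["t"], ["a"], []))  # executed
print(cache.run("SELECT a FROM t", ["t"], ["a"], []))  # reused from cache
```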
{"title":"An Approach for Retrieving Faster Query Results From Data Warehouse Using Synonymous Materialized Queries","authors":"S. Chakraborty, Jyotika Doshi","doi":"10.4018/IJDWM.2021040105","DOIUrl":"https://doi.org/10.4018/IJDWM.2021040105","url":null,"abstract":"The enterprise data warehouse stores an enormous amount of data collected from multiple sources for analytical processing and strategic decision making. The analytical processing is done using online analytical processing (OLAP) queries where the performance in terms of result retrieval time is an important factor. The major existing approaches for retrieving results from a data warehouse are multidimensional data cubes and materialized views that incur more storage, processing, and maintenance costs. The present study strives to achieve a simpler and faster query result retrieval approach from data warehouse with reduced storage space and minimal maintenance cost. The execution time of frequent queries is saved in the present approach by storing their results for reuse when the query is fired next time. The executed OLAP queries are stored along with the query results and necessary metadata information in a relational database is referred as materialized query database (MQDB). The tables, fields, functions, relational operators, and criteria used in the input query are matched with those of stored query, and if they are found to be same, then the input query and the stored query are considered as a synonymous query. Further, the stored query is checked for incremental updates, and if no incremental updates are required, then the existing stored results are fetched from MQDB. On the other hand, if the stored query requires an incremental update of results, then the processing of only incremental result is considered from data marts. The performance of MQDB model is evaluated by comparing with the developed novel approach, and it is observed that, using MQDB, a significant reduction in query processing time is achieved as compared to the major existing approaches. The developed model will be useful for the organizations keeping their historical records in the data warehouse.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73248408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OCL Constraints Checking on NoSQL Systems Through an MDA-Based Approach
Pub Date: 2021-01-01 | DOI: 10.4018/IJDWM.2021010101
F. Abdelhédi, A. A. Brahim, G. Zurfluh
Big data has received a great deal of attention in recent years. Not only is the amount of data on a completely different level than before, but the data also comes in different types, varying in format, structure, and source. This has changed the tools needed to handle big data, giving rise to NoSQL systems. While NoSQL systems have proven their efficiency in handling big data, how to store big data in NoSQL systems automatically remains an unsolved problem. This paper proposes an automatic approach for implementing UML conceptual models in NoSQL systems, including the mapping of the associated OCL constraints to the code required for checking them. To demonstrate the practical applicability of the work, the approach has been realized in a tool supporting four fundamental OCL expressions: iterate-based expressions, OCL predefined operations, the If expression, and the Let expression.
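To make the constraint-mapping idea concrete, here is a hand-written illustration of what generated checks might look like against a document-store representation; the paper's tool derives such code automatically from OCL, and the document layout and field names below are hypothetical.

```python
# Hypothetical document mirroring a UML class "Order" with an embedded
# collection association "lines"; names are illustrative, not the paper's.
order = {"id": 1, "lines": [{"product": "A", "qty": 2},
                            {"product": "B", "qty": 1}]}

# OCL: context Order inv PositiveQty: self.lines->forAll(l | l.qty > 0)
def check_positive_qty(doc: dict) -> bool:
    """forAll maps to Python's all() over the embedded collection."""
    return all(line["qty"] > 0 for line in doc["lines"])

# OCL: self.lines->iterate(l; total : Integer = 0 | total + l.qty) >= 1
def check_total_qty(doc: dict) -> bool:
    """An iterate-based expression maps to an accumulating fold."""
    return sum(line["qty"] for line in doc["lines"]) >= 1

assert check_positive_qty(order) and check_total_qty(order)
```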
{"title":"OCL Constraints Checking on NoSQL Systems Through an MDA-Based Approach","authors":"F. Abdelhédi, A. A. Brahim, G. Zurfluh","doi":"10.4018/IJDWM.2021010101","DOIUrl":"https://doi.org/10.4018/IJDWM.2021010101","url":null,"abstract":"Big data have received a great deal of attention in recent years. Not only is the amount of data on a completely different level than before, but also the authors have different type of data including factors such as format, structure, and sources. This has definitely changed the tools one needs to handle big data, giving rise to NoSQL systems. While NoSQL systems have proven their efficiency to handle big data, it's still an unsolved problem how the automatic storage of big data in NoSQL systems could be done. This paper proposes an automatic approach for implementing UML conceptual models in NoSQL systems, including the mapping of the associated OCL constraints to the code required for checking them. In order to demonstrate the practical applicability of the work, this paper has realized it in a tool supporting four fundamental OCL expressions: iterate-based expressions, OCL predefined operations, If expression, and Let expression.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77388253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Algorithms for Dynamic Incomplete Decision Systems
Pub Date: 2021-01-01 | DOI: 10.4018/IJDWM.2021070103
N. Thang, Long Giang Nguyen, Hoang Viet Long, N. Tuan, T. Tuan, Ngo Duy Tan
Attribute reduction is a crucial problem in data mining and knowledge discovery in big data. In incomplete decision systems, the tolerance rough set model is fundamental to solving the problem by computing the reduct to reduce execution time. However, previous proposals used the traditional filter approach, so the reduct was not optimal in the number of attributes or the classification accuracy. The problem is critical in dynamic incomplete decision systems, which are more appropriate for real-world applications. Therefore, this paper proposes two novel incremental algorithms that combine the filter and wrapper approaches, IFWA_ADO and IFWA_DEO, for dynamic incomplete decision systems. IFWA_ADO computes the reduct incrementally when adding multiple objects, while IFWA_DEO updates the reduct when removing multiple objects. The algorithms are verified on six data sets. Experimental results show that the filter-wrapper algorithms achieve higher performance than other filter incremental algorithms.
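For readers unfamiliar with the underlying model, the sketch below computes the tolerance relation on which such algorithms rest: two objects are tolerant when they agree on every attribute where both have values. Only the rows and columns of this relation that touch added or deleted objects change, which is the property incremental algorithms exploit; this is a sketch of the foundation, not IFWA_ADO/IFWA_DEO themselves.

```python
MISSING = None  # the "*" value in rough-set notation

def tolerant(x, y) -> bool:
    """Tolerance relation for incomplete systems: tolerant when, on
    every attribute, values agree or at least one value is missing."""
    return all(a == b or a is MISSING or b is MISSING for a, b in zip(x, y))

def tolerance_classes(objects):
    """Tolerance class of each object: everything tolerant with it."""
    return [[j for j, y in enumerate(objects) if tolerant(x, y)]
            for x in objects]

objs = [(1, 0, None), (0, None, 2), (1, 0, 2)]
print(tolerance_classes(objs))  # [[0, 2], [1], [0, 2]]
```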
{"title":"Efficient Algorithms for Dynamic Incomplete Decision Systems","authors":"N. Thang, Long Giang Nguyen, Hoang Viet Long, N. Tuan, T. Tuan, Ngo Duy Tan","doi":"10.4018/IJDWM.2021070103","DOIUrl":"https://doi.org/10.4018/IJDWM.2021070103","url":null,"abstract":"Attribute reduction is a crucial problem in the process of data mining and knowledge discovery in big data. In incomplete decision systems, the model using tolerance rough set is fundamental to solve the problem by computing the redact to reduce the execution time. However, these proposals used the traditional filter approach so that the reduct was not optimal in the number of attributes and the accuracy of classification. The problem is critical in the dynamic incomplete decision systems which are more appropriate for real-world applications. Therefore, this paper proposes two novel incremental algorithms using the combination of filter and wrapper approach, namely IFWA_ADO and IFWA_DEO, respectively, for the dynamic incomplete decision systems. The IFWA_ADO computes reduct incrementally in cases of adding multiple objects while IFWA_DEO updates reduct when removing multiple objects. These algorithms are also verified on six data sets. Experimental results show that the filter-wrapper algorithms get higher performance than the other filter incremental algorithms.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76236229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Temporal Multidimensional Model and OLAP Operators
Pub Date: 2020-10-01 | DOI: 10.4018/ijdwm.2020100107
Waqas Ahmed, E. Zimányi, A. Vaisman, R. Wrembel
Usually, data in data warehouses (DWs) are stored using the multidimensional (MD) model. DWs often change in content and structure for several reasons, for instance changes in the business scenario or in technology. For accurate decision making, a DW model must allow storing and analyzing time-varying data. This paper addresses the problem of keeping track of the history of the data in a DW. First, a formalization of the traditional MD model is proposed and then extended into a generalized temporal MD model. The model comes equipped with a collection of typical online analytical processing (OLAP) operations with temporal semantics, formalized for four classic operations: roll-up, dice, project, and drill-across. Finally, the mapping from the generalized temporal model into a relational schema is presented, together with an implementation of the temporal OLAP operations in standard SQL.
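A toy example of a temporal roll-up in standard SQL (issued here through sqlite3): the schema and data are hypothetical, not the paper's relational mapping, but they show the temporal semantics, aggregating each sale under the category that was valid at the time of the sale.

```python
import sqlite3

# Hypothetical schema: a sales fact plus a product-category assignment
# carrying valid-time columns (not the paper's exact mapping).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales(product TEXT, sale_date TEXT, amount REAL);
CREATE TABLE product_category(product TEXT, category TEXT,
                              valid_from TEXT, valid_to TEXT);
INSERT INTO sales VALUES ('p1','2020-02-10',100),('p1','2020-07-01',150);
INSERT INTO product_category VALUES
  ('p1','books','2020-01-01','2020-06-30'),
  ('p1','media','2020-07-01','2020-12-31');
""")

# Temporal roll-up: join each sale to the category valid at sale time,
# then aggregate to the category level.
rows = conn.execute("""
SELECT pc.category, SUM(s.amount)
FROM sales s
JOIN product_category pc
  ON s.product = pc.product
 AND s.sale_date BETWEEN pc.valid_from AND pc.valid_to
GROUP BY pc.category
""").fetchall()
print(rows)  # [('books', 100.0), ('media', 150.0)]
```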
{"title":"A Temporal Multidimensional Model and OLAP Operators","authors":"Waqas Ahmed, E. Zimányi, A. Vaisman, R. Wrembel","doi":"10.4018/ijdwm.2020100107","DOIUrl":"https://doi.org/10.4018/ijdwm.2020100107","url":null,"abstract":"Usually, data in data warehouses (DWs) are stored using the notion of the multidimensional (MD) model. Often, DWs change in content and structure due to several reasons, like, for instance, changes in a business scenario or technology. For accurate decision-making, a DW model must allow storing and analyzing time-varying data. This paper addresses the problem of keeping track of the history of the data in a DW. For this, first, a formalization of the traditional MD model is proposed and then extended as a generalized temporal MD model. The model comes equipped with a collection of typical online analytical processing (OLAP) operations with temporal semantics, which is formalized for the four classic operations, namely roll-up, dice, project, and drill-across. Finally, the mapping from the generalized temporal model into a relational schema is presented together with an implementation of the temporal OLAP operations in standard SQL.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75657896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommender Systems Based on Resonance Relationship of Criteria With Choquet Operation
Pub Date: 2020-10-01 | DOI: 10.4018/ijdwm.2020100103
H. Huynh, Le Hoang Son, Cu Nguyen Giap, T. Huynh, H. H. Luong
Recommender systems are becoming increasingly important in every aspect of life in meeting the diverse needs of users. One of the main goals of a recommender system is to make decisions based on criteria, so it is important to have a solution that is consistent with user requirements and the characteristics of the stored data. This paper proposes a novel recommendation method based on the resonance relationship of user criteria with the Choquet operation for building a decision-making model. The method has been evaluated with the multirecsys tool, built on the R language. Experiments show that outputs from the proposed model are effective and reliable. It can be applied in appropriate contexts to improve efficiency and minimize the limitations of current recommender systems.
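A small sketch of the Choquet integral at the heart of such a model: unlike a plain weighted average, it can encode interactions between criteria through a capacity (a monotone set function). The capacity below is a toy example that rewards the joint presence of two criteria, and the criterion names are invented for illustration.

```python
def choquet(scores: dict[str, float], capacity) -> float:
    """Choquet integral of per-criterion scores w.r.t. a capacity with
    capacity(frozenset()) == 0 and capacity(all criteria) == 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending
    total, prev = 0.0, 0.0
    for i, (crit, value) in enumerate(items):
        coalition = frozenset(c for c, _ in items[i:])  # scores >= value
        total += (value - prev) * capacity(coalition)
        prev = value
    return total

# Toy capacity rewarding the joint presence of 'price' and 'quality'.
def cap(s: frozenset) -> float:
    base = {"price": 0.3, "quality": 0.3, "service": 0.2}
    bonus = 0.2 if {"price", "quality"} <= s else 0.0
    return min(1.0, sum(base[c] for c in s) + bonus)

print(choquet({"price": 0.8, "quality": 0.6, "service": 0.9}, cap))  # 0.72
```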
{"title":"Recommender Systems Based on Resonance Relationship of Criteria With Choquet Operation","authors":"H. Huynh, Le Hoang Son, Cu Nguyen Giap, T. Huynh, H. H. Luong","doi":"10.4018/ijdwm.2020100103","DOIUrl":"https://doi.org/10.4018/ijdwm.2020100103","url":null,"abstract":"Recommender systems are becoming increasingly important in every aspect of life for the diverse needs of users. One of the main goals of the recommender system is to make decisions based on criteria. It is thus important to have a reasonable solution that is consistent with user requirements and characteristics of the stored data. This paper proposes a novel recommendation method based on the resonance relationship of user criteria with Choquet Operation for building a decision-making model. It has been evaluated on the multirecsys tool based on R language. Outputs from the proposed model are effective and reliable through the experiments. It can be applied in appropriate contexts to improve efficiency and minimize the limitations of the current recommender systems.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72946115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Model-Driven Architecture for the Trajectory Data Warehouse Modeling
Pub Date: 2020-10-01 | DOI: 10.4018/ijdwm.2020100102
Noura Azaiez, J. Akaichi
Business intelligence includes the concept of data warehousing to support decision making. The ETL process, the core of warehousing technology, is responsible for pulling data out of source systems and placing it into a data warehouse. Given technological developments in geographical information systems, pervasive systems, and positioning systems, traditional warehouse features are unable to handle the mobility aspect now integrated into the warehousing chain. Therefore, the trajectory, or mobility, data gathered from the movements of mobile objects have to be managed through what is called trajectory ETL. For this purpose, the authors emphasize the power of the model-driven architecture (MDA) approach to achieve the whole transformation task, in this case transforming the trajectory data source model that describes the resulting trajectories into trajectory data mart models. The proposed approach is illustrated with an epilepsy patient state case study.
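A toy model-to-model transformation in the spirit of the MDA step, with invented field names rather than the paper's metamodels: raw timestamped positions (source model) become segment facts with derived measures (target data mart model).

```python
from dataclasses import dataclass

# Source model: raw timestamped positions of a moving object
# (field names are illustrative, not the paper's metamodel).
@dataclass
class TrackPoint:
    object_id: str
    t: float  # seconds
    x: float
    y: float

# Target data-mart model: one fact row per trajectory segment.
@dataclass
class SegmentFact:
    object_id: str
    t_start: float
    t_end: float
    distance: float

def to_data_mart(points: list[TrackPoint]) -> list[SegmentFact]:
    """Consecutive source points of the same object become segment
    facts with a derived distance measure."""
    pts = sorted(points, key=lambda p: (p.object_id, p.t))
    facts = []
    for a, b in zip(pts, pts[1:]):
        if a.object_id == b.object_id:
            d = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
            facts.append(SegmentFact(a.object_id, a.t, b.t, d))
    return facts
```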
{"title":"The Model-Driven Architecture for the Trajectory Data Warehouse Modeling","authors":"Noura Azaiez, J. Akaichi","doi":"10.4018/ijdwm.2020100102","DOIUrl":"https://doi.org/10.4018/ijdwm.2020100102","url":null,"abstract":"Business Intelligence includes the concept of data warehousing to support decision making. As the ETL process presents the core of the warehousing technology, it is responsible for pulling data out of the source systems and placing it into a data warehouse. Given the technology development in the field of geographical information systems, pervasive systems, and the positioning systems, the traditional warehouse features become unable to handle the mobility aspect integrated in the warehousing chain. Therefore, the trajectory or the mobility data gathered from the mobile object movements have to be managed through what is called the trajectory ELT. For this purpose, the authors emphasize the power of the model-driven architecture approach to achieve the whole transformation task, in this case transforming trajectory data source model that describes the resulting trajectories into trajectory data mart models. The authors illustrate the proposed approach with an epilepsy patient state case study.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74381297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering Similarity Across Heterogeneous Features: A Case Study of Clinico-Genomic Analysis
Pub Date: 2020-10-01 | DOI: 10.4018/ijdwm.2020100104
V. Janeja, J. Namayanja, Y. Yesha, A. Kench, V. Misal
The analysis of data containing a heterogeneous mix of continuous and categorical attributes poses challenges for clustering. Traditional clustering techniques such as k-means work well on small homogeneous datasets, but as data size grows it becomes increasingly difficult to find meaningful, well-formed clusters. In this paper, the authors propose an approach based on a combined similarity function that measures similarity across numeric and categorical features, and employ this function in a clustering algorithm to identify similarity between data objects. The findings indicate that the proposed approach handles heterogeneous data better by forming well-separated clusters.
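One plausible reading of a combined similarity function is a Gower-style distance, sketched below (the paper's exact definition is not given in the abstract): range-normalized differences on numeric features, simple mismatch on categorical ones, then average-linkage clustering on the precomputed distances.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def combined_distance(num: np.ndarray, cat: np.ndarray, w: float = 0.5):
    """Gower-style combined distance: range-normalised absolute
    differences on numeric features, simple mismatch on categorical
    features, blended with weight w."""
    rng = num.max(axis=0) - num.min(axis=0)
    rng[rng == 0] = 1.0  # avoid division by zero on constant features
    n = len(num)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d_num = np.mean(np.abs(num[i] - num[j]) / rng)
            d_cat = np.mean(cat[i] != cat[j])
            D[i, j] = D[j, i] = w * d_num + (1 - w) * d_cat
    return D

num = np.array([[1.0, 70.0], [1.2, 72.0], [9.0, 140.0]])
cat = np.array([["A", "x"], ["A", "x"], ["B", "y"]])
D = combined_distance(num, cat)
labels = fcluster(linkage(squareform(D), method="average"),
                  t=2, criterion="maxclust")
print(labels)  # the first two objects cluster together
```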
{"title":"Discovering Similarity Across Heterogeneous Features: A Case Study of Clinico-Genomic Analysis","authors":"V. Janeja, J. Namayanja, Y. Yesha, A. Kench, V. Misal","doi":"10.4018/ijdwm.2020100104","DOIUrl":"https://doi.org/10.4018/ijdwm.2020100104","url":null,"abstract":"The analysis of both continuous and categorical attributes generating a heterogeneous mix of attributes poses challenges in data clustering. Traditional clustering techniques like k-means clustering work well when applied to small homogeneous datasets. However, as the data size becomes large, it becomes increasingly difficult to find meaningful and well-formed clusters. In this paper, the authors propose an approach that utilizes a combined similarity function, which looks at similarity across numeric and categorical features and employs this function in a clustering algorithm to identify similarity between data objects. The findings indicate that the proposed approach handles heterogeneous data better by forming well-separated clusters.","PeriodicalId":54963,"journal":{"name":"International Journal of Data Warehousing and Mining","volume":null,"pages":null},"PeriodicalIF":1.2,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84918911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}