Measurement and modeling of UV intensity inside a photoreactor for wastewater treatment
B. Boutra, L. Aoudjit, F. Madjene, A. Sebti, H. Lebik, S. Igoud
DOI: 10.1145/2832987.2833008
Water availability and quality represent a major challenge in the face of water scarcity and pollution. The United Nations predicts that 44% of the world's population will live under severe water scarcity by 2050, and countries located in the sub-humid and semi-arid regions of the world will be especially affected. Water disinfection by ultraviolet (UV) radiation is an emerging water treatment technique owing to its simplicity and the low risk of forming toxic by-products. When UV radiation is absorbed by the cells of microorganisms, it damages the genetic material (DNA) within the cell so that the organisms can no longer grow or reproduce, thus preventing illnesses such as cryptosporidiosis. DNA damage mainly results from irradiation at wavelengths within the UV-C region of the spectrum (200-280 nm) and is maximised at around 254 nm; this is the principle by which UV is used for disinfection. The time required to achieve total disinfection depends on both the water quality and the irradiation intensity. Effective disinfection is obtained by combining a suitable intensity and exposure time into a UV "dose", usually expressed in mJ/cm² (= mW·s/cm², the product of UV intensity in mW/cm² and contact time in seconds). The target dose depends on the application, but a dose of 40 mJ/cm² is commonly used for UV disinfection systems, validated for broad-spectrum inactivation of waterborne pathogens such as bacteria, viruses and protozoan parasites such as Cryptosporidium. The objective of this study is to present the state of the art of intensity measurement and modeling methods, and the use of the MPSS (multiple point source summation) model to evaluate the UV intensity distribution inside the photoreactor.
Modularity Measurement and Evolution in Object-Oriented Open-Source Projects
Mamdouh Alenezi, M. Zarour
DOI: 10.1145/2832987.2833013
Throughout software evolution, maintenance actions such as adding new features, fixing problems and improving the design may positively or negatively affect design quality. Quality degradation, if not handled in time, can accumulate and cause serious problems for future maintenance effort. In this work, we study the modularity evolution of two open-source systems by answering two main research questions: first, what measures can be used to assess the modularity level of software, and second, did the modularity level of the selected open-source software improve over time? By investigating modularity measures, we have identified the main measures that can be used to quantify software modularity. Based on our analysis, the modularity of these two systems is not improving over time.
Development of a micro energy harvester using multiple vibration modes
C. Tay, C. Quan, Chengkuo Lee, Hongwei Liu
DOI: 10.1145/2832987.2833045
Energy harvesting from ambient vibrations provides a clean and regenerative solution for powering autonomous sensors, which are widely used in numerous practical applications including general, medical and military industries. In recent years, numerous research works on energy harvesting have been carried out using piezoelectric, electromagnetic, electrostatic and thermoelectric mechanisms. In this work, we propose a micro piezoelectric energy harvester with a wide operating frequency range. The device consists of a movable circular mass with three sets of double-layer aluminum coils, a circular ring that incorporates a permanent magnet, and a supporting beam. The harvester is capable of harnessing energy at multiple vibration modes with various three-dimensional (3-D) excitation frequencies. The 3-D dynamic behavior and performance of the device show that the first vibration mode is an out-of-plane motion, while the second and third modes are in-plane motions at angles of 60° and 150°, respectively, to the horizontal axis. For a specific excitation acceleration, maximum power densities can be achieved at the different 3-D vibration modes. The experimental results show good agreement with the simulations and indicate good potential for the device to be developed into a practical tool for harvesting energy at multiple 3-D vibration modes.
Genetic Algorithms for Solving Bicriteria Dynamic Job Shop Scheduling Problems with Alternative Routes
Abdalla Ali, P. Hackney, David Bell, M. Birkett
DOI: 10.1145/2832987.2833038
Solving scheduling problems with a single criterion is considered unsatisfactory for real-world applications, so more attention has been given to multiple-objective scheduling problems. In this paper, we use genetic algorithms to solve job shop scheduling problems with alternative routes and dynamic job arrivals in order to simultaneously minimize the maximum lateness and the makespan. First, genetic algorithms are applied to find a set of optimal feasible solutions for the makespan criterion. Individuals or solutions with maximum lateness values less than or equal to that of the minimum-makespan solution are then used to form the initial population of the genetic algorithm for the second criterion, which minimizes the maximum lateness. A method for finding non-dominated solutions is then proposed, and a weighted sum is used to select the most desirable solution based on the weight of each criterion. Finally, the model is tested on different instances, and the obtained results demonstrate the effectiveness of the proposed method for solving bicriteria dynamic job shop scheduling problems with alternative routes.
Modeling of Cathode Pt/C Electrocatalyst Degradation and Performance of a PEMFC using Artificial Neural Network
Nasim Maleki, E. Maleki
DOI: 10.1145/2832987.2833000
Modeling is of great importance for developing fuel cell technology at the lowest possible expense. Modeling fuel cells requires a good understanding of the chemical, physical and mechanical processes of the operating system and the related equations, which are hard to determine in many cases. Artificial intelligence (AI) offers a way to overcome this difficulty without costly experiments. AI systems such as artificial neural networks (ANNs) have been employed to solve, predict and optimize engineering problems over the last decade. In the present study, the capability of an ANN to predict the performance of a proton exchange membrane fuel cell (PEMFC), considering the degradation of the cathode electrocatalyst layer, is investigated. Experimental data are utilized for training and testing the networks. Current density, temperature, humidity, number of potential cycles, potential cycle time step, platinum loading, and fuel/oxidant flow rates are taken as the inputs, while the cell potential, the percentage of platinum mass loss at the cathode, and the location of the platinum particles that diffuse into and deposit in the membrane are taken as the outputs of the ANNs. A back-propagation (BP) algorithm is used to train the network. When the networks are finely tuned, the modeling results are in good agreement with the experimental data and the ANN responses are acceptable.
A Session Key Utilization Based Approach For Memory Management in Wireless Networks
Arun Nagaraja, Saravana Kumar
DOI: 10.1145/2832987.2833065
Symmetric cryptography is used in wireless networks with the help of session keys. A session key is a randomly generated key that provides security; it is the encryption and decryption key used to establish communication between a user and another computer. Using session keys, memory can be managed dynamically through different encryption and decryption techniques. When memory is overloaded with many requests, it is allocated dynamically with the help of session keys and connected to external devices. By performing these dynamic operations, memory can be managed and users obtain uninterrupted access to the network. In this paper, we show that session keys can be used to authenticate request-handler devices, so that memory can be handled easily and congestion can be controlled.
Evaluating Quality of Primary Studies on Determining Object-Oriented Code Refactoring Candidates
Jehad Al Dallal
DOI: 10.1145/2832987.2833026
Refactoring is a maintenance task that aims at improving the quality of software source code by restructuring it without altering its external behavior. Identifying refactoring opportunities by manually inspecting and analyzing the source code of the system under consideration is a time-consuming and costly process. Researchers in this area typically introduce fully or semi-automated techniques to determine or predict refactoring candidates, and they report related evaluation studies. The quality of the performed studies has a great impact on the accuracy of the obtained results. In this paper, we demonstrate an application of a proposed framework that evaluates published primary studies (PSs) on refactoring prediction/identification techniques. The framework is applied to 47 selected PSs to evaluate the quality of the studies based on their design, conduct, analysis, and conclusions. We use the results to comment on the weaknesses of the existing PSs and the issues that should be considered more carefully in future studies.
Sets Visualization using their Graph Representation
Abdalmunam Abdalla, M. Koyuturk, Abdelsalam M. Maatuk, A. O. Mohammed
DOI: 10.1145/2832987.2833023
With the emergence of new data acquisition technologies, large amounts of data are available in many domains. While a significant amount of computational research is dedicated to the analysis of such data, the data also need to be visualized in a way that is easy to analyze and understand. Recently, there have been significant advances in visualizing graphs; however, few tools exist for the automatic visualization of sets. In this paper, we devise a spectral approach for visualizing overlapping sets, so that the underlying hierarchy and relations of the sets can be easily understood by visual inspection. The algorithm utilizes the spectral decomposition of the graph representation of the sets to compute the best coordinates for all items on the Euclidean plane. The experimental results were very encouraging and give a positive indication of the efficiency of the proposed method.
An approach for Intrusion Detection using Text Mining Techniques
G. R. Kumar, N. Mangathayaru, G. Narasimha
DOI: 10.1145/2832987.2833076
The problem of clustering is NP-complete. The existing clustering algorithms in the literature are approximate algorithms, which cluster the underlying data differently for different datasets. The k-means clustering algorithm is suitable for frequency data but not for binary data. When an application runs, several system calls are implicitly invoked in the background. Based on these system calls, we can predict the normal or abnormal behavior of applications, which can be done by classification. In this paper, we classify running processes into normal and abnormal states using their system call behavior. We reduce the system call feature vector by applying the k-means algorithm with the proposed measure for dimensionality reduction. We present the design of the proposed measure, which has finite upper and lower bounds.
An Approach for Mining Similarity Profiled Temporal Association Patterns Using Gaussian Based Dissimilarity Measure
V. Radhakrishna, P. Kumar, V. Janaki
DOI: 10.1145/2832987.2833069
The problem of mining frequent patterns in non-temporal databases has been studied extensively. Conventional frequent pattern algorithms are not applicable for finding temporal frequent items in temporal databases. Given a reference support time sequence, the problem of mining similar temporal association patterns is of current interest among researchers working on temporal databases. The main objective of this research is to propose and validate the suitability of a Gaussian distribution based dissimilarity measure for finding similar and dissimilar temporal association patterns of interest. The designed measure serves as a similarity measure for finding similar temporal association patterns. Finally, we consider the problem of mining similarity profiled temporal patterns from the set of time-stamped transactions of a temporal database using the proposed measure. Using a case study, we show how the proposed dissimilarity measure may be used to find temporal frequent patterns, and we compare the results with work in the literature. The proposed measure has fixed lower and upper bounds of 0 and 1, respectively, which is an advantage over the Euclidean distance measure, which has no fixed upper bound.