A Levy Flight-based Decomposition Multi-objective Optimization Based on Grey Wolf Optimizer
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965178
Masoumeh Khubroo, S. J. Mousavirad
The goal of an optimization technique is to find the best solution to an optimization problem. In a single-objective problem, the best solution is the optimal value of the objective function, while in a multi-objective problem, selecting solutions is not straightforward because several conflicting objective functions must be balanced. Many diverse applications, such as image processing and data mining, can be formulated as multi-objective problems. This paper presents a new decomposition-based multi-objective optimization method using the grey wolf optimizer, which transforms the problem into several sub-problems and examines all of them simultaneously. Our proposed algorithm obtains the Pareto front using a neighborhood relation among the sub-problems. The Lévy flight distribution is also used to strengthen the exploration and exploitation behavior of the algorithm and thereby improve its search ability. The performance of the proposed algorithm is evaluated on the UF family of benchmark functions in terms of different metrics, namely inverted generational distance (IGD), generational distance (GD), hypervolume (HV), and spacing (SP). The experimental results indicate the superior performance of the proposed method.
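The abstract does not spell out how the Lévy-distributed steps are generated. As a minimal sketch (assuming Mantegna's algorithm and a step applied toward a sub-problem's leading wolf, neither of which is confirmed by the paper), a Lévy step in NumPy could look like this:

    import numpy as np
    from math import gamma

    def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
        # Mantegna's algorithm for a Levy-stable step with index beta.
        sigma_u = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma_u, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    # Hypothetical use: perturb a wolf toward its sub-problem's leader with a Levy step.
    wolf, leader = np.random.rand(10), np.random.rand(10)
    wolf = wolf + 0.01 * levy_step(10) * (leader - wolf)

The heavy tail of the distribution yields occasional long jumps among many small steps, which is the exploration/exploitation effect the abstract refers to.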
{"title":"A Levy Flight-based Decomposition Multi-objective Optimization Based on Grey Wolf Optimizer","authors":"Masoumeh Khubroo, S. J. Mousavirad","doi":"10.1109/ICCKE48569.2019.8965178","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965178","url":null,"abstract":"The goal of an optimization technique is to find the best solution to an optimization problem. In a single-objective problem, the best solution is the optimal value for the objective function, while in a multi-objective problem, the selection of solutions is not a straightforward task because there are several objective functions which are in conflict. There are many diverse applications such as image processing and data mining, which can be formulated as a multi-objective problem. This paper presents a new decomposition-based multi-objective optimization method using the grey wolf optimizer, which transforms the problem into several sub-problems and examines all the sub-problems simultaneously. Our proposed algorithm obtains the Pareto front using a neighborhood relation among the sub-problems. The levy flight distribution has also been used which increases the exploration and exploitation features in the algorithm in order to improve the search ability. The performance of our proposed algorithm is evaluated on UF family of benchmark functions in terms of different metric such as inverted generational distance (IGD), generational distance (GD), hyper-volume (HV), and spacing (SP). The experimental results indicate the superior performance of the proposed method.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"74 1","pages":"155-161"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88346784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Provably-Secure ECC-based Authentication and Key Management Protocol for Telecare Medical Information Systems
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965036
H. Amintoosi, Mahdi Nikooghadam
Telecare medical information systems are becoming more and more popular because they deliver health services such as remote access to health profiles for doctors, staff, and patients. Since these systems operate entirely over the Internet, they face various security and privacy threats, so a significant challenge is establishing a secure key agreement and authentication procedure between medical servers and patients. Recently, an ECC-based authentication and key agreement scheme for telecare medical systems in the smart city was proposed by Khatoon et al. In this paper, we first analyze Khatoon et al.'s protocol and demonstrate that it is vulnerable to known-session-specific temporary information attacks and cannot satisfy perfect forward secrecy. Next, we propose a provably secure and efficient authentication and key agreement protocol using Elliptic Curve Cryptography (ECC). We informally analyze the security of the proposed protocol and show that it satisfies perfect forward secrecy and resists known attacks such as user/server impersonation. We also simulate and formally analyze the security of the protocol using the Scyther tool. The results show its robustness against different types of attacks.
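The protocol itself is not reproduced in the abstract. Purely to illustrate the ECC primitive that such key agreement schemes build on, the sketch below performs a plain ECDH exchange with the Python cryptography package; the curve choice, HKDF parameters, and the "tmis-session" label are illustrative assumptions, not part of the authors' protocol:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates an ephemeral EC key pair (curve choice is illustrative).
    patient_key = ec.generate_private_key(ec.SECP256R1())
    server_key = ec.generate_private_key(ec.SECP256R1())

    # Both sides derive the same shared secret from the peer's public key.
    shared = patient_key.exchange(ec.ECDH(), server_key.public_key())
    assert shared == server_key.exchange(ec.ECDH(), patient_key.public_key())

    # A symmetric session key is derived from the raw shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"tmis-session").derive(shared)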
{"title":"A Novel Provably-Secure ECC-based Authentication and Key Management Protocol for Telecare Medical Information Systems","authors":"H. Amintoosi, Mahdi Nikooghadam","doi":"10.1109/ICCKE48569.2019.8965036","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965036","url":null,"abstract":"Telecare medical information systems are becoming more and more popular due to the provision of delivering health services, including remote access to health profiles for doctors, staff, and patients. Since these systems are installed entirely on the Internet, they are faced with different security and privacy threats. So, a significant challenge is the establishment of a secure key agreement and authentication procedure between the medical servers and patients. Recently, an ECC-based authentication and key agreement scheme for telecare medical systems in the smart city has been proposed by Khatoon et.al. In this paper, at first, we descriptively analyze Khatoon et al.’s protocol and demonstrate that it is vulnerable against known-session-specific temporary information attacks and cannot satisfy perfect forward secrecy. Next, we propose a provably secure and efficient authentication and key agreement protocol using Elliptic Curve Cryptography (ECC). We informally analyze the security of the proposed protocol, and prove that it can satisfy perfect forward secrecy and resist known attacks such as user/server impersonation attack. We also simulate and formally analyze the security of the protocol using the Scyther tool. The results show its robustness against different types of attacks.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"5 1","pages":"85-90"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88703120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Execution Time of CUDA Kernels with Unified Memory Capability
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964952
Fatemeh Khorshahiyan, S. Shekofteh, Hamid Noori
Nowadays, GPUs are among the most important, remarkable, and popular computing platforms, and in recent years they have increasingly been used as co-processors and accelerators. As the technology grows, GPUs with more advanced features and capabilities are manufactured and launched by the world's largest commercial companies. Unified memory is one such feature, introduced on the latest generations of Nvidia GPUs, which lets programmers write programs against a single memory space shared between the CPU and GPU. This feature makes programming considerably easier. The present study introduces this feature and its attributes. In addition, a model is proposed to predict the execution time of applications under unified-memory programming based on information from a non-unified implementation. The proposed model predicts the execution time of a kernel with an average accuracy of 87.60%.
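The abstract does not name the prediction model. As a rough sketch of the idea, one could regress unified-memory runtimes on features measured from non-unified runs; the feature set, the synthetic data, and the use of plain linear regression below are all assumptions for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_percentage_error
    from sklearn.model_selection import train_test_split

    # Hypothetical per-kernel features from non-unified runs: kernel time,
    # host-to-device bytes, device-to-host bytes, access-locality score.
    X = np.random.rand(200, 4)
    # Hypothetical measured runtimes of the same kernels using unified memory.
    y = X @ np.array([1.2, 0.8, 0.6, 2.0]) + 0.05 * np.random.randn(200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"average prediction accuracy ~ {100 * (1 - mape):.2f}%")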
{"title":"Predicting Execution Time of CUDA Kernels with Unified Memory Capability","authors":"Fatemeh Khorshahiyan, S. Shekofteh, Hamid Noori","doi":"10.1109/ICCKE48569.2019.8964952","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964952","url":null,"abstract":"Nowadays, GPUs are known as one of the most important, most remarkable, and perhaps most popular computing platforms. In recent years, GPUs have increasingly been considered as co-processors and accelerators. Along with growing technology, Graphics Processing Units (GPUs) with more advanced features and capabilities are manufactured and launched by the world's largest commercial companies. Unified memory is one of these new features introduced on the latest generations of Nvidia GPUs which allows programmers to write a program considering the uniform memory shared between CPU and GPU. This feature makes programming considerably easier. The present study introduces this new feature and its attributes. In addition, a model is proposed to predict the execution time of applications if using unified memory style programming based on the information of non-unified style implementation. The proposed model can predict the execution time of a kernel with an average accuracy of 87.60%.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"4 1","pages":"437-443"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89248456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Persian Classical Music Instrument Recognition (PCMIR) Using a Novel Persian Music Database
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965166
Seyed Muhammad Hossein Mousavi, V. B. Surya Prasath
Audio signal classification is an important field in pattern recognition and signal processing. Classification of musical instruments is a branch of audio signal classification and poses unique challenges due to the diversity of available instruments. Automatic expert systems could assist or even replace human experts. The aim of this work is to classify Persian musical instruments using a combination of features extracted from the audio signal. We believe such an automatic system for recognizing Persian musical instruments could be very useful in educational contexts as well as in art universities. Features such as Mel-Frequency Cepstrum Coefficients (MFCCs), spectral roll-off, spectral centroid, zero crossing rate, and entropy energy are employed and work well for this purpose. These features are extracted from the audio signals of our novel database, which contains audio samples for 7 Persian musical instrument classes: Ney, Tar, Santur, Kamancheh, Tonbak, Ud, and Setar. For feature selection, a fuzzy entropy measure is employed, and classification is performed by a Multi-Layer Neural Network (MLNN). It should be mentioned that this is one of the first studies on Persian musical instrument classification. The validation confusion matrix reports true positive and false negative rates along with the numbers of true and false observations. The acquired results are promising and satisfactory.
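As a rough illustration of this kind of pipeline (not the authors' exact implementation), the listed features can be extracted with librosa and fed to a multi-layer perceptron; entropy energy is omitted here because it would need custom code, and the file paths and labels are placeholders:

    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier

    def extract_features(path):
        y, sr = librosa.load(path, duration=5.0)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
        zcr = librosa.feature.zero_crossing_rate(y).mean()
        return np.hstack([mfcc, centroid, rolloff, zcr])

    # files and labels would come from the Persian-instrument database described above.
    # X = np.vstack([extract_features(f) for f in files])
    # clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, labels)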
{"title":"Persian Classical Music Instrument Recognition (PCMIR) Using a Novel Persian Music Database","authors":"Seyed Muhammad Hossein Mousavi, V. B. Surya Prasath","doi":"10.1109/ICCKE48569.2019.8965166","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965166","url":null,"abstract":"Audio signal classification is an important field in pattern recognition and signal processing. Classification of musical instruments is a branch of audio signal classification and poses unique challenges due to the diversity of available instruments. Automatic expert systems could assist or be used as a replacement for humans. The aim of this work is to classify Persian musical instruments using combination of extracted features from audio signal. We believe such an automatic system to recognize Persian musical instruments could be very useful in an educational context as well as art universities. Features like Mel-Frequency Cepstrum Coefficients (MFCCs), Spectral Roll-off, Spectral Centroid, Zero Crossing Rate and Entropy Energy are employed and work well for this purpose. These features are extracted from audio signals out of our novel database. This database contains audio samples for 7 Persian musical instrument classes: Ney, Tar, Santur, Kamancheh, Tonbak, Ud and Setar. In feature selection part, Fuzzy entropy measure is employed and classification task takes place by Multi-Layer Neural Network (MLNN). It should be mentioned that this research is one of the first researches on Persian musical instrument classification. Validation confusion matrix made of true positive and false negative rates along with true and false observations numbers. Acquired results are so promising and satisfactory.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"47 1","pages":"122-130"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84949928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel hybrid feature selection based on ReliefF and binary dragonfly for high dimensional datasets
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965106
Atefe Asadi Karizaki, M. Tavassoli
High dimensionality is a common challenge in large datasets. A combination of filter and wrapper methods is used to select an appropriate set of features in such datasets; a hybrid method is desirable because it exploits the advantages of both approaches while covering their disadvantages. In this paper, a hybrid method for feature selection in high-dimensional data is presented. In the proposed algorithm, the ReliefF algorithm is used as a filter method for ranking features. Next, the binary dragonfly algorithm (BDA) is applied as a wrapper method: it uses the ranked features to find an optimal feature subset incrementally and iteratively. Solutions are evaluated hierarchically, first by minimizing the cross-validation loss and then by reducing the number of features. The proposed algorithm and the compared algorithms were run on 5 datasets, and the results indicate that the proposed algorithm not only reduces the dimensionality of the data but also improves the performance of classifiers on the test data.
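Neither the ReliefF implementation nor the BDA update is given in the abstract. The sketch below only illustrates the incremental filter-then-wrapper evaluation: a mutual-information ranking stands in for ReliefF and a greedy forward pass stands in for the binary dragonfly search.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)

    # Filter stage: rank all features (mutual information stands in for ReliefF).
    ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

    # Wrapper stage: add ranked features one at a time, keep those that improve
    # cross-validation accuracy (a greedy stand-in for the dragonfly search).
    selected, best = [], 0.0
    for f in ranking:
        score = cross_val_score(KNeighborsClassifier(), X[:, selected + [f]], y, cv=5).mean()
        if score > best:
            selected, best = selected + [f], score

    print(f"{len(selected)} features selected, CV accuracy {best:.3f}")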
{"title":"A novel hybrid feature selection based on ReliefF and binary dragonfly for high dimensional datasets","authors":"Atefe Asadi Karizaki, M. Tavassoli","doi":"10.1109/ICCKE48569.2019.8965106","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965106","url":null,"abstract":"High dimensionality is a common challenge in large datasets. Combination of the filter and wrapper methods is used to select the appropriate set of features in these datasets. The hybrid method is desirable, which uses the advantages of both the methods and covers the disadvantages. In this paper, a hybrid method for feature selection in high dimension data is presented. In proposed algorithm, the ReliefF algorithm is used as a filter method for ranking features. Next, the binary dragonfly algorithm (BDA) is applied as a wrapper method. The BDA algorithm uses the ranked features to find optimal set of features incrementally and iteratively. Minimizing the cross-validation loss and decreasing the number of features is considered to evaluate the solution, hierarchically. The proposed algorithm and other compared algorithms run over 5 datasets and the results indicated that the proposed algorithm not only reduce the dimension of dataset but also improve the performance of classifiers on the test data.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"34 1","pages":"300-304"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80106973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tasks Decomposition for Improvement of Genetic Network Programming
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964971
M. Roshanzamir, M. Palhang, Abdolreza Mirzaei
Genetic Network Programming is an evolutionary algorithm that can be considered an extension of Genetic Programming, but with graph-structured instead of tree-structured individuals. The algorithm is mainly used for single- and multi-agent decision making: it uses a graph to model the strategy an agent follows to achieve its goal. However, the crossover and mutation operators repeatedly destroy the structures of individuals to create new ones. Although this can lead to better structures, it may also break suitable structures in elite individuals and increase the time needed to reach optimal solutions. In this research, we modify the evolution process of Genetic Network Programming so that breaking useful structures becomes less likely. In the proposed algorithm, the experiences of the best individuals in successive generations are saved, and in some specific generations these experiences are used to generate offspring. The proposed method was tested on two common agent-control benchmarks, namely Tile-world and Pursuit-domain. The results showed the superiority of our method over standard Genetic Network Programming and several of its variants.
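As a heavily simplified illustration of the "saved experiences" idea (individuals reduced to plain successor graphs, with no distinction between judgment and processing nodes and no fitness model), an elite archive of node transitions could be kept and sampled like this:

    import random
    from collections import defaultdict

    N_NODES = 8

    def random_individual():
        # An individual reduced to a directed graph: each node points to a successor.
        return {n: random.randrange(N_NODES) for n in range(N_NODES)}

    experience = defaultdict(lambda: defaultdict(int))

    def record_elite(individual):
        # Save the elite's node transitions as accumulated experience.
        for node, nxt in individual.items():
            experience[node][nxt] += 1

    def offspring_from_experience():
        # In selected generations, build offspring by sampling successors
        # according to how often elites used them, instead of random crossover.
        child = {}
        for node in range(N_NODES):
            options = experience[node]
            if options:
                successors, counts = zip(*options.items())
                child[node] = random.choices(successors, weights=counts, k=1)[0]
            else:
                child[node] = random.randrange(N_NODES)
        return child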
{"title":"Tasks Decomposition for Improvement of Genetic Network Programming","authors":"M. Roshanzamir, M. Palhang, Abdolreza Mirzaei","doi":"10.1109/ICCKE48569.2019.8964971","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964971","url":null,"abstract":"Genetic Network Programming is an evolutionary algorithm which can be considered as an extension of Genetic Programming but with a graph-structure instead of tree-structure individuals. This algorithm is mainly used for single/multi-agent decision making. It uses a graph to model a strategy that an agent follows to achieve its goal. However, in this algorithm, crossover and mutation operators repeatedly destroy the structures of individuals and make new ones. Although this can lead to better structures, it may also break suitable structures in elite individuals and increase the time needed to achieve optimal solutions. In this research, we modified the evolution process of Genetic Network Programming so that breaking useful structures will be less likely. In the proposed algorithm, the experiences of the best individuals in successive generations are saved. Then, in some specific generations, these experiences are used to generate offspring. The experimental results of the proposed method were tested on two common agent control problem benchmarks namely Tile-world and Pursuit-domain. The results showed the superiority of our method with respect to standard Genetic Network Programming and some of its versions.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"21 1","pages":"201-206"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73036619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Parallel Jobs Scheduling Algorithm in The Cloud Computing
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964727
Zahra Mohtajollah, F. Adibnia
Cloud computing is a computational model that provides computing services and their required resources over the Internet, so computation is always available without the burden of maintaining large-scale hardware and software. Resource utilization has been decreasing as parallel processing grows in most parallel applications; accordingly, job scheduling, one of the fundamental issues in cloud computing, should be managed more efficiently. Accurate parallel job scheduling is greatly important for cloud providers in order to guarantee the quality of their service, given that optimal scheduling improves resource utilization, reduces response time, and satisfies user requirements. Most current parallel job scheduling algorithms do not use the consolidation of parallel workloads to improve scheduling performance. This paper introduces a scheduling algorithm that enriches the powerful ACFCFS algorithm. We employ tentative runs, workload consolidation, and a two-tier virtual machine architecture, and in particular we consider deadlines for jobs in order to prevent starvation of parallel jobs and improve performance. The simulation results indicate that our algorithm considerably reduces the makespan and the maximum waiting time, and therefore improves scheduling compared to the basic algorithm (ACFCFS). Overall, it can be employed as a strong and effective method for scheduling parallel jobs in the cloud.
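ACFCFS itself is not described in the abstract; the sketch below is only a single-resource toy showing deadline-aware ordering of arriving jobs, i.e., promoting the job whose deadline slack is smallest, which is one way the described starvation prevention could behave:

    def schedule(jobs):
        # jobs: list of dicts with name, arrival, runtime, deadline (single resource).
        clock, order = 0.0, []
        pending = sorted(jobs, key=lambda j: j["arrival"])
        while pending:
            arrived = [j for j in pending if j["arrival"] <= clock] or pending[:1]
            # Run the arrived job whose deadline slack is smallest.
            job = min(arrived, key=lambda j: j["deadline"] - (clock + j["runtime"]))
            pending.remove(job)
            clock = max(clock, job["arrival"]) + job["runtime"]
            order.append((job["name"], clock))
        return order  # completion times; the last entry gives the makespan

    print(schedule([{"name": "A", "arrival": 0, "runtime": 4, "deadline": 10},
                    {"name": "B", "arrival": 1, "runtime": 2, "deadline": 4},
                    {"name": "C", "arrival": 2, "runtime": 1, "deadline": 12}]))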
{"title":"A Novel Parallel Jobs Scheduling Algorithm in The Cloud Computing","authors":"Zahra Mohtajollah, F. Adibnia","doi":"10.1109/ICCKE48569.2019.8964727","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964727","url":null,"abstract":"Cloud Computing is a computational model that provides all computing services and its requirements over the Internet. So our computation is always available without burdens of carrying large-scale hardware and software. The utilization of resources has been decreasing due to the growth of parallel processing in most parallel applications. Accordingly, job scheduling, one of the fundamental issues in cloud computing, should manage more efficiently. The accuracy of parallel job scheduling is greatly important for cloud providers in order to guarantee the quality of their service. Given that optimal scheduling improves utilization of resources, reduces response time and satisfies user requirements. Most of the current parallel job scheduling algorithms do not use the consolidation of parallel workloads to improve scheduling performance. This paper introduces a scheduling algorithm enriches the powerful ACFCFS algorithm. To begin with, we employ tentative runs, workload consolidation and two-tier virtual machines architecture. Particularly, we consider deadline for jobs in order to prevent starvation of parallel jobs and improve performance. The simulation results indicate that our algorithm considerably reduces the makespan and the maximum waiting time. Therefore it improves scheduling compare to the basic algorithm (ACFCFS). Overall, it can be employed as a strong and effective method for scheduling parallel jobs in the cloud.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"20 1","pages":"243-248"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72786944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recovering Causal Networks based on Windowed Granger Analysis in Multivariate Time Series
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965099
Ali Gorji Sefidmazgi, M. G. Sefidmazgi
Reconstruction of a causal network from multivariate time series is an important problem in data science. Regular causality analysis based on the Granger method does not consider multiple delays between the elements of a causal network. In contrast, the Windowed Granger method not only considers the effect of multiple delays but also provides a flexible framework for using various linear and nonlinear regression methods within Granger causality analysis. In this work, we use four methods with the Windowed Granger approach, including hypothesis tests of linear regression, LASSO, and random forest. Their performance on two simulated and real-world time series is then compared with ground-truth networks and with other causality-recovery methods.
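A minimal sketch of the windowed idea with one of the regressors the paper mentions (LASSO): stack the last few lags of every series as predictors, fit a sparse model per target, and read edges off the surviving coefficients. The window length, threshold, and toy data here are assumptions.

    import numpy as np
    from sklearn.linear_model import LassoCV

    def lagged_design(X, max_lag):
        # Stack the last max_lag values of every series as predictors for time t.
        rows = [X[t - max_lag:t][::-1].ravel() for t in range(max_lag, len(X))]
        return np.asarray(rows), X[max_lag:]

    def recover_network(X, max_lag=3, threshold=1e-3):
        # X: (time, series). edges[i, j] is True when series i appears to drive j.
        design, targets = lagged_design(X, max_lag)
        n = X.shape[1]
        edges = np.zeros((n, n), dtype=bool)
        for j in range(n):
            coef = LassoCV(cv=5).fit(design, targets[:, j]).coef_.reshape(max_lag, n)
            edges[:, j] = (np.abs(coef) > threshold).any(axis=0)
        return edges

    # Toy example: series 1 is driven by series 0 with a delay of two steps.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(500, 2))
    x[2:, 1] += 0.8 * x[:-2, 0]
    print(recover_network(x))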
{"title":"Recovering Causal Networks based on Windowed Granger Analysis in Multivariate Time Series","authors":"Ali Gorji Sefidmazgi, M. G. Sefidmazgi","doi":"10.1109/ICCKE48569.2019.8965099","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965099","url":null,"abstract":"Reconstruction of causal network from multivariate time series is an important problem in data science. Regular causality analysis based on Granger method does not consider multiple delays between elements of a causal network. In contrast, the Windowed Granger method not only considers the effect of mutiple delays, but also provides a flexible framework to utilize various linear and nonlinear regression methods within Granger causality analysis. In this work, we have used four methods with Windowed Granger method including hypothesis tests of linear regression, LASSO and random forest. Then, their performance on two simulated and real-world time series are compared with ground truth networks and other causality recovering methods.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"103 1","pages":"170-175"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80927255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Age Estimation using Brain MRI and 3D Convolutional Neural Network
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964975
Nastaran Pardakhti, H. Sajedi
Human brain age has become a popular aging biomarker and is used to detect differences among healthy subjects; it is also used as a health biomarker to distinguish groups of normal subjects from groups of patients. Machine Learning (ML) prediction models, and especially Deep Learning (DL) systems, have grown rapidly in the field of Brain Age Estimation (BAE) as the basis of disease detection systems. In this paper, a DL method based on a 3D-CNN is designed to obtain accurate BAE results. The training dataset is selected from the IXI (Information eXtraction from Images) MRI data repository. In addition, we aim to decrease the computations required by the deep model on the 3D MRI images, which is generally done by removing unnecessary parts of the 3D brain images. First, the deep 3D-CNN model is trained on healthy MRI data from the IXI dataset, normalized with SPM. Next, several experiments are performed to decrease the computations while preserving the overall performance. The best achieved Mean Absolute Error (MAE) is 5.813 years.
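The paper's architecture is not given in the abstract; below is a minimal sketch of a 3D-CNN regressor trained with an MAE (L1) objective, with layer sizes that are illustrative guesses only, not the reported network:

    import torch
    import torch.nn as nn

    class BrainAge3DCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.regressor = nn.Linear(32, 1)

        def forward(self, x):  # x: (batch, 1, depth, height, width) MRI volume
            return self.regressor(self.features(x).flatten(1))

    model = BrainAge3DCNN()
    volumes = torch.randn(2, 1, 64, 64, 64)             # stand-in for normalized MRI
    ages = torch.tensor([34.0, 61.0])
    mae = nn.L1Loss()(model(volumes).squeeze(1), ages)  # MAE objective, as reported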
{"title":"Brain Age Estimation using Brain MRI and 3D Convolutional Neural Network","authors":"Nastsrsn Pardakhti, H. Sajedi","doi":"10.1109/ICCKE48569.2019.8964975","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964975","url":null,"abstract":"Human Brain Age has become a popular aging biomarker and is used to detect the differences among healthy subjects. It is also used as a health biomarker between the group of normal subjects and the group of patients. Machine Learning (ML) prediction models and especially Deep Learning (DL) systems are rapidly grown up in the field of Brain Age Estimation (BAE) to present a disease detection system. In this paper, a DL method based on 3D-CNN is designed to get an accurate result of BAE. The training dataset is selected from the IXI (Information eXtraction from Images) MRI data repository. In addition, it is aimed to decrease the computations required by the deep model on the 3D MRI images. It is generally done by removing the unnecessary parts of brain 3D images. First, the deep 3D-CNN model is trained by healthy MRI data of IXI dataset which are normalized by SPM. Next, some experiments are done due to decrease the computations while saving the total performance. The best-achieved Mean Absolute Error (MAE) is 5.813 years.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"115 1","pages":"386-390"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77889615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Mood Based Music Playlist Generation By Clustering The Audio Features
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965190
Mahta Bakhshizadeh, A. Moeini, Mina Latifi, M. Mahmoudi
The increasing attention to music recommendation and playlist generation in today's music industry is undeniable. One of the main goals is to generate personalized playlists automatically for each user. Beyond that, appropriately switching among these playlists to play tracks that match the user's current mood would certainly lead to more advanced and personalized music player apps. In this paper, a data-science approach is provided to model music moods, which are created by clustering the tracks extracted from users' listening histories. Each cluster consists of music tracks with similar audio features found in the user's listening history. Knowing which track a user is currently listening to, their mood can be identified by determining the cluster of that track, and it is presumed that playing the other tracks contained in the same cluster as the next tracks will enhance their satisfaction. A suggestion for making the results visually interpretable, which could help the corresponding music players with GUI design, is provided as well. Experimental results of a case study on real datasets, collected from users' listening histories on Last.fm and enriched via the Spotify API, clarify the framework and support the mentioned presumption.
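A minimal sketch of the clustering step: K-means over per-track audio features, with the current track's cluster taken as the active mood playlist. The feature set, the number of clusters, and the random data below are assumptions; a real run would pull features per track from the Spotify API.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical per-track audio features (e.g., danceability, energy, valence, tempo).
    features = np.random.rand(500, 4)
    scaled = StandardScaler().fit_transform(features)
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaled)

    # The cluster of the currently playing track is taken as the user's mood,
    # and the rest of that cluster becomes the next-up playlist.
    current = scaled[42].reshape(1, -1)
    mood = kmeans.predict(current)[0]
    playlist = np.where(kmeans.labels_ == mood)[0]
    print(f"mood cluster {mood}: {len(playlist)} candidate tracks")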
{"title":"Automated Mood Based Music Playlist Generation By Clustering The Audio Features","authors":"Mahta Bakhshizadeh, A. Moeini, Mina Latifi, M. Mahmoudi","doi":"10.1109/ICCKE48569.2019.8965190","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965190","url":null,"abstract":"The increase of receiving attention to music recommendation and playlist generation in today’s music industry is undeniable. One of the main goals is to generate personalized playlists automatically for each user. Beyond that, an appropriate switching among these playlists to play the tracks based on the current mood of the user would certainly lead to the development of more advanced and personalized music player apps. In this paper, a data scientific approach is provided to model the music moods which are created by clustering the tracks extracted from users’ listening. Each Cluster consists of music tracks with similar audio features existing in the user’s listening history. Knowing which music track is currently being listened by users, their mood would be specified by determining the cluster of that music. It is presumed that playing the other music tracks contained in the same cluster as the next tracks will enhance their satisfaction. A suggestion for making the results visually interpretable which could help the corresponding music players with GUI design is provided as well. Experimental results of a case study from real datasets collected from Users’ listening history on Last.fm benefiting from Spotify API clarifies the framework along with supporting the mentioned presumption.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"316 1","pages":"231-237"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76143802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}