Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964913
Zahra Moti, S. Hashemi, Amir Namavar
Detecting malware samples is one of the most important issues in computer security. Malware variants are growing exponentially as computers are used more widely in industry, homes, and elsewhere. Among the different types of malware samples, zero-day samples are the most challenging. Conventional antivirus systems, which rely on known malware patterns, cannot detect zero-day samples because they have never seen them before. As reported in [1], in 2018, 76% of successful attacks on organization endpoints were based on zero-day samples. Therefore, predicting these types of attacks and preparing a solution is an open challenge. This paper presents a deep generative adversarial network to generate the signatures of unseen malware samples; the generated signatures are potentially similar to malware samples that may be released in the future. After generation, these synthetic data were added to the dataset to train a classifier that is robust against new variants of malware. A neural network is also applied to extract high-level features from raw bytes for detection. The proposed method uses only the header of the executable file for detection, a small piece of the file that contains information about it. To validate the method, we classified both the raw and the new representations using three classification algorithms. We also compared our work with another malware detection method based on the PE header. The results show that the generated data improve the accuracy of the classification algorithms by at least 1%.
{"title":"Discovering Future Malware Variants By Generating New Malware Samples Using Generative Adversarial Network","authors":"Zahra Moti, S. Hashemi, Amir Namavar","doi":"10.1109/ICCKE48569.2019.8964913","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964913","url":null,"abstract":"Detecting malware sample is one of the most important issues in computer security. Malware variants are growing exponentially by more usage of computer in industries, homes, and other places. Among different types of malware samples, zero-day samples are more challenging. The conventional antivirus systems, which rely on known malware patterns, cannot detect zero-day samples since did not see them before. As reported in [1], in 2018, 76% of successful attacks on organization endpoints were based on zero-day samples. Therefore, predicting these types of attacks and preparing a solution is an open challenge.This paper presents a deep generative adversarial network to generate the signature of unseen malware samples; The generated signature is potentially similar to the malware samples that may be released in the future. After generating the samples, these generated data were added to the dataset to train a robust classifier against new variants of malware. Also, neural network is applied for extracting high-level features from raw bytes for detection. In the proposed method, only the header of the executable file was used for detection, which is a small piece of the file that contains some information about the file. To validate our method, we used three classification algorithms and classified the raw and new representation using them. Also, we compared our work with another malware detection using the PE header. The results of this paper show that the generated data improves the accuracy of classification algorithms by at least 1%.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"20 1","pages":"319-324"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88121241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965200
Leila Hassanlou, S. Meshgini, E. Alizadeh, A. Farzamnia
Stem cells are regarded as promising cells for treating patients because of their ability to renew themselves and their potential to differentiate into several lineages. When stem cells differentiate into adipose tissue, a large number of lipid droplets usually develop in these cells; they can be observed by oil red O staining, which is typically used to evaluate adipocyte differentiation status. For many differentiation experiments, counting the population of lipid droplets is necessary. Manual experiments for identifying and investigating lipid droplets are expensive, time-consuming, and subjective, yet few studies have applied machine learning and image processing to the automatic detection and counting of lipid droplets in intracellular images. In this study, microscopic images were prepared to demonstrate the adipocyte differentiation of mesenchymal stem cells. After preprocessing, the images were fed to a tiny convolutional neural network, and the images produced at the network output were examined using two image-processing methods. Finally, the number of lipid droplets was obtained with acceptable accuracy, and their exact locations were displayed.
{"title":"Detection and Counting of Lipid Droplets in Adipocyte Differentiation of Bone Marrow-Derived Mesenchymal Stem Cells Using a Tiny Convolutional Network and Image Processing","authors":"Leila Hassanlou, S. Meshgini, E. Alizadeh, A. Farzamnia","doi":"10.1109/ICCKE48569.2019.8965200","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965200","url":null,"abstract":"Stem cells are a bunch of cells that are considered as encouraging cells for treating patients because of their ability to regenerate themselves and also their potential for differentiation into several lineages. When stem cells are differentiated into adipose tissues, a great variety of lipid droplets usually grow in these cells and can be observed by oil red O staining, which is typically used for evaluating adipocyte differentiation status. For numerous differentiation experiments, counting and calculation of the population of lipid droplets are necessary. The disadvantages of conducting experiments for identification and investigation of lipid droplets include being expensive, time-consuming and subjective. There are few studies carried out in the field of machine learning and image processing for the automatic detection and counting of lipid droplets in intracellular images. In this study, to demonstrate the adipocyte differentiation of mesenchymal stem cells, their microscopic images were prepared. After the preprocessing operation, the images were fed to a tiny convolutional neural network. Images created within the network output were examined using two image processing methods. Finally, the number of lipid droplets was obtained with acceptable accuracy, and their exact location was displayed.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"24 1","pages":"176-181"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87854943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964698
Abdorreza Sharifihosseini
As with many other businesses, the banking industry tends to digitalize its working processes and use state-of-the-art techniques in its financial and commercial operations. The core of the banking business is managing communication with customers, which ultimately results in investment in customers. This paper describes the structure of a recommender system that estimates the places of purchase where customers have not yet used a particular type of Bon card but are likely to buy, and proposes those places to the customers. Matrix factorization is a model-based collaborative filtering method widely used for rating prediction. Generally, bank products are not rated by customers; these products are usually purchased or offered to customers by the bank. Therefore, to derive ratings, the RFM (recency, frequency, monetary) method, an analysis instrument from marketing, is used together with a clustering algorithm to determine the value of each customer and place. If a place has no value, i.e., the data have missing values, we do not know whether the customer prefers that place for purchase or not. This paper presents a hybrid method based on a dimension-reduction technique that can predict the missing values in the data and thereby offer recommendations to customers. Assessment of the proposed model by root mean square error (RMSE) indicates that the proposed architecture has a lower error than common collaborative filtering methods.
{"title":"A Case Study for Presenting Bank Recommender Systems based on Bon Card Transaction Data","authors":"Abdorreza Sharifihosseini","doi":"10.1109/ICCKE48569.2019.8964698","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964698","url":null,"abstract":"As with many other businesses, banking industry tends to digitalize its working processes and use state-of-the-art technique in the financial and commercial areas in its business. The main core of the bank business is managing communication with customers which eventually results in investment on customers. In this paper, the structure of a recommender system is described, whereby using the recommender technology the places for purchase in which so far, the customers have not used any special type of Bon cards but are probable to buy from them is estimated and proposed to the customers.Matrix factorization is a type of method for collaborative filtering based on models which is widely used for rating prediction concept. Generally, bank products are not rated by customers; these products are usually purchased or offered to customers by the bank. Therefore, to determine the rating, RFM 1 method which is an instrument for analysis in marketing is used along with clustering algorithm to determine the customer value and place. If a place does not have any value, i.e. the data have missing values, it suggests that we do not know whether the customer prefers the place for purchase or not. In this paper, a hybrid method based on dimension reduction technique is presented. This method is able to predict the missing values in data to offer recommendation to customers. Assessment of the proposed model through Root Mean Square Error 2 indicates that the architecture in this paper has less error in comparison to common collaborative filtering methods.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"29 1","pages":"72-77"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73598463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964738
Farzam Dorostkar, S. Mirzakuchaki
The emergence of heterogeneous computing systems has been accompanied by serious design issues. Application scheduling, a highly influential factor in the performance of these systems, is one of the major design considerations. In this paper, we propose a new critical-path-oriented list scheduling heuristic called Communication-Intensive Path on a Processor (CIPOP) for heterogeneous computing environments. It is a modification of the well-known CPOP algorithm, which introduced the idea of scheduling the most costly entry-exit path of tasks, commonly known as the critical path, on a single processor. This processor selection strategy generally has different potential impacts on the computation and communication costs along a selected path in the produced schedule. However, these potentially different effects are not considered in the common definition of a critical path. By differentiating between these two types of cost, the proposed algorithm introduces a novel, performance-effective definition of a critical path that is compatible with this processor selection strategy. CIPOP has the same time complexity as state-of-the-art list scheduling heuristics, which is of the order O(v² × p) for v tasks and p processors. A comprehensive experiment on a wide variety of randomly generated application DAGs demonstrates the performance improvement of the proposed algorithm.
{"title":"List Scheduling for Heterogeneous Computing Systems Introducing a Performance-Effective Definition for Critical Path","authors":"Farzam Dorostkar, S. Mirzakuchaki","doi":"10.1109/ICCKE48569.2019.8964738","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964738","url":null,"abstract":"The emergence of heterogeneous computing systems has been accompanied by serious design issues. Being a highly influential factor on performance in these systems, application scheduling is one of the major design considerations. In this paper, we propose a new critical path-oriented list scheduling heuristic algorithm called Communication-Intensive Path on a Processor (CIPOP) for heterogeneous computing environments. It is a modification of the well-known CPOP algorithm that presented the idea of scheduling the most costly entry-exit path of tasks, commonly known as the critical path, on a single processor. Generally, this processor selection strategy has different potential impacts on computation and communication costs along a selected path in the produced schedule. However, these probably different effects are not considered in the common definition of a critical path. Differentiating between these two types of costs, the proposed algorithm introduces a novel performance-effective definition for a critical path that is compatible with this processor selection strategy. CIPOP has a time complexity the same as that of the state-of-the-art list scheduling heuristic algorithms, which is of the order O(v2.× p) for v tasks and p processors. The conducted comprehensive experiment based on a wide variety of randomly generated application DAGs demonstrates the performance improvement of the proposed algorithm.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"10 1","pages":"356-362"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78915929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965047
Ardeshir Mansouri, Mohammad Ordikhani, M. S. Abadeh, Masih Tajdini
Syncope, or fainting, refers to a temporary loss of consciousness usually related to insufficient blood flow to the brain. It can have several causes, ranging from simple to serious conditions. Syncope can be caused by life-threatening conditions not evident in the first evaluations, which can lead to serious outcomes, including death, after discharge from the hospital. We have developed a decision tool to identify syncope patients aged 18 years or older who are at risk of a serious event within 30 days of discharge from the hospital. We used data provided by the Tehran Heart Clinic (THC). The dataset enrolls adults aged 18 or above with signs of syncope who presented to the THC within 24 hours of the event. Standardized variables from clinical evaluation and investigations were collected. Serious adverse events included death, intracerebral hemorrhage (ICH) or subarachnoid hemorrhage (SAH), cerebrovascular accident (CVA), device implantation, myocardial infarction, arrhythmia, traumatic syncope, or cardiac surgery within 30 days. 356 patients with syncope were enrolled; the mean age was 44.5 years and 53.6% were women. Serious events occurred in 26 (7.3%) of the patients within 30 days of discharge. Several machine learning algorithms, including decision trees, SMO, neural networks, Naïve Bayes, and random forests, were applied to the dataset to predict patients with serious adverse outcomes, and the WEKA program was used to validate the results. The results show that with the random forest algorithm, the accuracy and ROC area reached 91.09% and 0.90, respectively, whereas previous statistical risk scores such as the San Francisco score yielded lower ROC area readings.
{"title":"Predicting Serious Outcomes in Syncope Patients Using Data Mining Techniques","authors":"Ardeshir Mansouri, Mohammad Ordikhani, M. S. Abadeh, Masih Tajdini","doi":"10.1109/ICCKE48569.2019.8965047","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965047","url":null,"abstract":"Syncope or fainting refers to a temporary loss of consciousness usually related to insufficient blood flow to the brain and can be due to several causes, which are either simple or serious conditions. Syncope can be caused by life-threatening conditions not evident in the first evaluations, which can lead to serious outcomes, including death, after discharge from the hospital. We have developed a decision tool to identify syncope patients with 18 years of age or higher who are at risk of a serious event within 30 days after discharge from the hospital.We used the data provided by the Tehran Heart Clinic. In this dataset adults with 18 years old or above with syncope signs are enrolled. The patients presented themselves within 24 hours after the event to the THC. Standardized variables from clinical evaluation and investigations have been collected. Serious adverse events included death, Intracerebral hemorrhage (ICH) or Subarachnoid hemorrhage (SAH), Cerebrovascular accident (CVA), Device Implantation, myocardial infarction, arrhythmia, traumatic syncope or cardiac surgery within 30 days. 356 patients were enrolled with syncope; the mean age was 44.5 years and 53.6% were women. Serious events occurred among 26 (7.3%) of the patients within 30 days of discharge from the hospital.Different machine learning algorithms such as Decision Tree, SMO, Neural Networks, Naïve Bayes and Random Forest have been used on the dataset to predict patients with serious adverse outcomes and the WEKA program has been used to validate the results.Results show that when using Random Forrest Algorithm, the accuracy rate and ROC Area reached 91.09% and 0.90. However, previous statistical risk scores such as the San Francisco Score resulted in lower ROC Area readings.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"18 1","pages":"409-413"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80297258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964735
Somayeh Adeli, M. P. Aghababa
A metasearch engine is a system that queries several different search engines, merges the returned results, and presents the best ones. The principal component of a metasearch engine is the method used to merge the returned results. Most existing merging algorithms rely on information derived from the ranking scores of the different search engines. In this paper, a reformed genetic algorithm (RGA) is proposed for aggregating the results of different search engines. In the RGA, a chaotic sequence is applied to select the parents to mate, preventing the RGA from getting stuck in local optima. A combination of the pitch adjustment rule and uniform crossover (CPARU) is also proposed for further mutation of the chromosomes. In optimizing search engine results, the proposed method tries to find weights for document positions so as to allocate each document to the best position. Therefore, the only information required is the number of search engines that place each document at the corresponding position. Accordingly, this technique works independently of the search engines' ranking scores. Experimental results show that the RGA outperforms the genetic algorithm (GA), the Borda method, the Kendall-tau genetic algorithm (GKTu), and the Spearman's footrule genetic algorithm (GSFD).
{"title":"Metasearch engine result optimization using reformed genetic algorithm","authors":"Somayeh Adeli, M. P. Aghababa","doi":"10.1109/ICCKE48569.2019.8964735","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964735","url":null,"abstract":"Metasearch engine is a system that applies several different search engines, merges the returned results from the search engines and presents the best results. Principal component of the metasearch engine is the method applied for merging the given results. The most of existing merging algorithms are relied on the information achieved by ranking scores which is integrated with the results of different search engines. In this paper, a reformed genetic algorithm (RGA) is proposed for aggregating results of different search engines. In the RGA, a chaotic sequence is applied to select the parents to mate, preventing the RGA to get stuck in local optima. The combination of pitch adjustment rule and uniform crossover (CPARU) is also proposed to further mutate of chromosomes. In the problem of optimizing search engine results, the proposed method tries to find weights of documents’ place to allocate each document to the best place. Therefore, the only required information is to know the number of the search engines that finds each document in the corresponding place. Accordingly, this technique works independently of the different search engines’ ranking scores. The experimental results have depicted that the RGA outperforms the genetic algorithm (GA), Borda method, Kendall-tau genetic algorithm (GKTu) and Spearmen's footrule genetic algorithm (GSFD) methods.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"37 1","pages":"18-25"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80213011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964761
Zahra Soleimanitaleb, Mohammad Ali Keyvanrad, Ali Jafari
Object tracking is one of the most important tasks in computer vision, with many practical applications such as traffic monitoring, robotics, and autonomous vehicle tracking. Much research has been done in recent years, but because of challenges such as occlusion, illumination variation, and fast motion, research in this area continues. In this paper, various object tracking methods are examined and a comprehensive classification is presented, dividing tracking methods into four main categories (feature-based, segmentation-based, estimation-based, and learning-based), each with its own sub-categories. The main focus of this paper is on learning-based methods, which are classified into three categories: generative methods, discriminative methods, and reinforcement learning. One of the sub-categories of the discriminative model is deep learning, which has recently received much attention because of its high performance.
{"title":"Object Tracking Methods:A Review","authors":"Zahra Soleimanitaleb, Mohammad Ali Keyvanrad, Ali Jafari","doi":"10.1109/ICCKE48569.2019.8964761","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964761","url":null,"abstract":"Object tracking is one of the most important tasks in computer vision that has many practical applications such as traffic monitoring, robotics, autonomous vehicle tracking, and so on. Different researches have been done in recent years, but because of different challenges such as occlusion, illumination variations, fast motion, etc. researches in this area continues. In this paper, various methods of tracking objects are examined and a comprehensive classification is presented that classified tracking methods into four main categories of feature-based, segmentation-based, estimation-based, and learning-based methods that each of which has its own sub-categories. The main focus of this paper is on learning-based methods, which are classified into three categories of generative methods, discriminative methods, and reinforcement learning. One of the sub-categories of the discriminative model is deep learning. Because of high-performance, deep learning has recently been very much considered.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"282-288"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90869203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8965020
A. Bastanfard, Dariush Amirkhani, Moslem abbasiasl
Observing celestial objects in the sky helps astronomers understand how the universe is shaped. Given the large number of objects observed by modern telescopes, analyzing them manually is very difficult. An important part of galaxy research is classification based on the Hubble scheme. The purpose of this research is to classify galaxy images using machine learning and neural networks. The galaxies are divided into regular galaxies, arranged in the two-dimensional Hubble scheme, and an irregular group. The regular classes of the Hubble scheme are divided into two distinct types, spiral and elliptical galaxies. Depending on its shape, a spiral galaxy can be mistaken for an elliptical or circular one, so distinguishing spiral galaxies from the other types is considered important. The proposed algorithm is tested on the Sloan Digital Sky Survey, using 570 images. In the first step, preprocessing is performed to remove image noise. In the next step, a total of 827 features are extracted from the galaxy images using sub-windows, moments of different color spaces, and local configuration pattern features. Classification is then performed on the extracted features using a support vector machine and compared with other methods, indicating that our approach performs better. In this study, the experiments were carried out on two classes (spiral and elliptical) and three classes (spiral, elliptical, and edge-on), with precisions of 96% and 94%, respectively.
{"title":"Automatic Classification of Galaxies Based on SVM","authors":"A. Bastanfard, Dariush Amirkhani, Moslem abbasiasl","doi":"10.1109/ICCKE48569.2019.8965020","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965020","url":null,"abstract":"Viewing heavenly objects in the sky helps astronomers understand how the world is shaped. Regarding the large number of objects observed by modern telescopes, it is very difficult to manually analyze it manually. An important part of galactic research is classification based on Hubble's design. The purpose of this research is to classify images of the stars using machine learning and neural networks. Particularly in this study, the galaxy's image is employed. The galaxies are divided into regular two-dimensional Hubble designs and an irregular bunch. The regular bands that are presented in the shape of the Hubble design are divided into two distinct spiral and elliptical galaxies. Spiral galaxies can be considered as elliptical or circular galaxies depending on the shape of the spiral, so the identification or classification of the spiral galaxy is considered important from other galaxies. In the proposed algorithm, the Sloan Digital Sky is used for testing, including 570 images. In the first step, its preprocessing operation is performed to remove image noise. In the next step, extracting the attribute from the galactic images takes place in a total of 827 properties using the sub-windows, the moments of different color spaces and the properties of the local configuration patterns. Then the classification is performed after extracting the property using a Support vector machine. And then compared with other methods, which indicate that our approach has worked better. In this study, the experiments were carried out in two spiral and elliptic classes and three spiral, elliptic and zinc-edged classes with a precision of 96 and 94 respectively.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"32-39"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84367839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964932
Zolfaghar Salmanian, Habib Izadkhah, A. Isazadeh
In the Infrastructure-as-a-Service model of the cloud computing paradigm, virtual machines are deployed on bare-metal servers called hosts. The host is responsible for allocating the resources a virtual machine requires, such as CPU, RAM, and network bandwidth. Thus, the problem of resource allocation reduces to how to place virtual machines on physical hosts. In this paper, we propose a continuous-time Markov chain (CTMC) model of data center performance based on the birth-death process of queueing systems. We focus on RAM allocation for virtual machines: in this architecture, a job is defined as a RAM assignment for a virtual machine. Job arrivals and service times are assumed to follow a Poisson process and an exponential distribution, respectively. The purpose of this modeling is to keep the number of running hosts in a scalable data center minimal while the quality of service, in terms of response time, remains acceptable given the system utilization.
{"title":"Resource Provisioning in IaaS Clouds; Auto-Scale RAM memory issue","authors":"Zolfaghar Salmanian, Habib Izadkhah, A. Isazadeh","doi":"10.1109/ICCKE48569.2019.8964932","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964932","url":null,"abstract":"In the Infrastructure-as-a-Service model of the cloud computing paradigm, virtual machines are deployed on bare-metal servers called hosts. The host is responsible for the allocation of required resources such as CPU, RAM memory, and network bandwidth for the virtual machine. Thus, the problem of resource allocation reduces to how to place the virtual machines on physical hosts. In this paper, we propose CTMC modeling based on the birth-death process of the queueing systems for the performance of the data center. We will focus on RAM allocation for virtual machines. In this architecture, a job is defined as RAM assignment for a virtual machine. Job arrivals and their service times are assumed to be based on the Poisson process and exponential distribution, respectively. The purpose of this modeling is to keep the number of running hosts minimal in a scalable datacenter while the quality of service in terms of response time is acceptable due to system utilization.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"48 1","pages":"455-460"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81795000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ICCKE48569.2019.8964762
Sima Naderi Mighan, M. Kahani, F. Pourgholamali
With the development and popularity of social networks, many people prefer to share their experiences on these networks. Researchers have proposed various methods that utilize user-generated content in location-based social networks (LBSNs) to recommend locations to users. However, the high sparsity of user check-in information makes it difficult to recommend appropriate and accurate locations. To address this issue, we propose a framework that utilizes a wide range of the information available in these networks, each piece of which has its own type, to provide appropriate recommendations. For this purpose, we encode the information as entities and their attributes in a heterogeneous graph, and then use graph embedding methods to embed all nodes in a unified semantic representation space. As a result, we can model the relations between users and venues efficiently and improve the accuracy of recommending a place to a user. Our method is implemented and evaluated on a Foursquare dataset, and the evaluation results show that our approach boosts performance in terms of precision, recall, and F-measure compared to the baseline.
{"title":"POI Recommendation Based on Heterogeneous Graph Embedding","authors":"Sima Naderi Mighan, M. Kahani, F. Pourgholamali","doi":"10.1109/ICCKE48569.2019.8964762","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964762","url":null,"abstract":"With the development and popularity of social networks, many human beings prefer to share their experiences on these networks. There are various methods proposed by the researcher which utilized user-generated content in the location-based social networks (LBSN) and recommend locations to users. However, there is a high sparsity in the user check-in information makes it tough to recommend the appropriate and accurate location to the user. To fix this issue, we put forward a proposal as a framework which utilizes a wide range of information available in these networks, each of which has its own type and provides appropriate recommendation. For this purpose, we encode the information as a number of entities and its attributes in the form of a heterogeneous graph, then graph embedding methods are used to embed all nodes in unified semantic representation space. As a result, we are able to model relations between users and venues in an efficient way and ameliorate the accuracy of the proposed method that recommends a place to a user. Our method is implemented and evaluated using Foursquare dataset, and the evaluation results depict that our work, boost performance in terms of precision, recall, and f-measure compared to the baseline work.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"188-193"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83943320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}