Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8964764
Milad Niazi-Razavi, Abdorreza Savadi, Hamid Noori
The low power consumption and high efficiency of heterogeneous systems improve processing power and enable the implementation of real-time applications. Deep learning, one of the hottest topics today, plays an important role in solving difficult problems such as machine vision. Traditional approaches to machine vision problems require hand-engineered features, which makes it difficult to build a comprehensive model for a problem. Deep learning has transformed machine vision and, combined with embedded systems, can be applied to many of today's problems. Convolutional neural networks have shown a high degree of efficiency in image classification and object detection. An important property of neural networks is the intrinsic parallelism of their structure, so embedded heterogeneous systems can provide excellent performance when implementing them. Implementing real-time object detection systems in embedded environments with limited computing resources and memory is challenging. This paper presents a method for implementing the MobileNet-SSD object detection system on the Jetson TK1 that attempts to improve performance by modifying the network's convolution layers and dividing tasks between the CPU and the GPU.
{"title":"Toward real-time object detection on heterogeneous embedded systems","authors":"Milad Niazi-Razavi, Abdorreza Savadi, Hamid Noori","doi":"10.1109/ICCKE48569.2019.8964764","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964764","url":null,"abstract":"low power consumption and high efficiency of heterogeneous systems improves processing power and enables the implementation of real-time applications. Deep learning, as one of the hottest topics of today, plays an important role in solving difficult problems such as machine vision. The use of traditional methods for solving visual machine problems requires the engineering of features by humans, which makes it difficult to create a comprehensive model for a problem. The use of revolutionary deep learning in the machine vision, which along with the embedded systems can be useful in many today's issues. Convolutional neural networks have shown a high degree of efficiency in the task of categorizing images and detecting objects. An important feature in neural networks is the intrinsic parallelism of its structure, which results in the use of embedded heterogeneous systems that can provide excellent performance in the implementation of neural networks. Implementing real-time objects detection systems in enclosed environments with limited computing resources and memory is challenging. This paper presents a method for implementing the MobileNet-SSD object detection system on the Jetson TK1, which attempts to improve performance by changing the network's convoys and dividing tasks between the central and the graphics processor.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"21 1","pages":"450-454"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81508227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8965143
Fatemeh Bashir Gonbadi, Hassan Khotanlou
Brain tumor analysis is a critical field in medical image processing. Glioma is one of the threatening brain tumors originating from glial cells and is divided into two grades according to the World Health Organization (WHO). In this paper, a novel method based on Convolutional Neural Networks (CNN) is presented to diagnose and classify Glioma tumors in Magnetic Resonance Imaging (MRI) images into three classes: Normal Brain, High-Grade Glioma, and Low-Grade Glioma. The proposed method consists of two parts: a preprocessing unit and a network. The preprocessing unit extracts the brain from the skull, and the resulting image is fed into a CNN for classification. The network first extracts primary features from the images and creates feature maps; its second part then extracts secondary features from the feature maps and finally classifies them. The datasets used in this paper are the IXI dataset for normal brain images and the BRATS2017 dataset for Glioma tumor images. The method classifies the MRI images into three categories with a desirable accuracy of 99.18%.
{"title":"Glioma Brain Tumors Diagnosis and Classification in MR Images based on Convolutional Neural Networks","authors":"Fatemeh Bashir Gonbadi, Hassan Khotanlou","doi":"10.1109/ICCKE48569.2019.8965143","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965143","url":null,"abstract":"Brain tumor analysis is a critical field in medical image processing. Glioma is one of the threatening brain tumors originating from glial cells and is divided into two grades according to the World Health Organization (WHO). In this paper, a novel method based on Convolutional Neural Networks (CNN) is presented to diagnose and classify Glioma tumors in Magnetic Resonance Imaging (MRI) images into three classes: Normal Brain, High-Grade Glioma and Low-Grade Glioma. The proposed method includes 2 parts: preprocessing unit and network. Preprocessing unit extracts brain from skull and the obtained image is fed into a CNN network to be classified. The network extracts primary features from images and creates feature maps. Then the second part of the network extracts secondary features from the feature maps and finally classifies them. The datasets used in this paper are IXI dataset as normal brain images and BRATS2017 dataset as Glioma tumor images. This method classifies the MRI images into three categories, performed with a desirable accuracy of 99.18%.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"22 2 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89099950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8964884
Mohammad Ahmadi Ganjei, R. Boostani
To cope with high-dimensional datasets, several feature selection schemes have been suggested, of three types: wrapper, filter, and hybrid. Hybrid feature selection methods adopt both filter and wrapper approaches, compromising between computational complexity and efficiency. In this paper, we propose a new hybrid feature selection method in which the filter stage ranks the features according to their relevance. Instead of running the wrapper on all the features, we use a split-to-blocks technique and show that block size has a considerable impact on performance. A sequential forward selection (SFS) method is applied to the ranked blocks of features in order to find the most relevant ones. The proposed method rapidly eliminates a large number of irrelevant features in its ranking stage, and different block sizes are then evaluated in the wrapper phase to choose a proper block size for SFS. As a result, the method has low time complexity while still producing good results. Hybrid methods consist of components with different criteria; we compare and analyze these criteria. To show the effectiveness of the proposed method, state-of-the-art hybrid feature selection methods such as re-Ranking, IGIS, and IGIS+ were implemented, and their classification accuracies over well-known benchmarks were computed using the K-nearest neighbor (KNN) and decision tree classifiers. Statistical tests applied to the compared results support the superiority of the proposed method over its counterparts.
{"title":"A Fast Hybrid Feature Selection Method","authors":"Mohammad Ahmadi Ganjei, R. Boostani","doi":"10.1109/ICCKE48569.2019.8964884","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964884","url":null,"abstract":"To confront with high dimensional datasets, several feature selection schemes have been suggested in three types of wrapper, filter, and hybrid. Hybrid feature selection methods adopt both filter and wrapper approaches by compromising between the computational complexity and efficiency. In this paper, we proposed a new hybrid feature selection method, in which in the filter stage the features are ranked according to their relevance. Instead of running the wrapper on all the features, we use a split-to-blocks technique and show that block size has a considerable impact on performance. A sequential forward selection (SFS) method was applied to the ranked blocks of features in order to find the most relevant features. The proposed method rapidly eliminates a large number of irrelevant features in its ranking stage, and then different block sizes were evaluated in the wrapper phase by choosing a proper block size using SFS. It causes this method to have a low time complexity, despite the good results. Hybrid methods consist of components that have different criteria for them. we compare and analyze different criteria. To show the effectiveness of the proposed method, state-of-the-art hybrid feature selection methods like re-Ranking, IGIS, and IGIS+ were implemented and their classification accuracies, over the known benchmarks, were computed using the K-nearest neighbor (KNN) and decision tree classifiers. Applying statistical tests to the compared results supports the superiority of the proposed method to the counterparts.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"6-11"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89820358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8964808
M. Noei, M. S. Abadeh
In this paper, we suggest a new technique that significantly improves the computational time of the genetic algorithm for imputing missing values. Data contain noise and missing values, which makes them unreliable for scientific purposes, so they must be preprocessed before use. Researchers either discard or impute missing data. Choosing an appropriate imputation method is necessary, and the choice depends on several factors such as the data types and the number of missing values. For higher missing value rates, missing value imputation (MVI) can be a suitable way to impute missing data in an incomplete dataset. One MVI method is the genetic algorithm; although the genetic algorithm may produce good results, its computational time is very high. The proposed algorithm is a combination of the genetic algorithm and the Asexual Reproduction Optimization (ARO) algorithm. We present an experimental evaluation on the Pima and Mammographic Mass datasets collected from the UCI repository. For small percentages of missing values, instances can be imputed by the ARO algorithm alone, but for large amounts our approach yields much better results, and the proposed technique works even better as the rate of missing values grows. The accuracy and computational time of the proposed algorithm are compared with other techniques such as mean imputation, K-Nearest Neighbor, and SVM. On average, our approach improved accuracy by 8% and ROC by 4%, and it requires less computational time than a basic genetic algorithm.
{"title":"A Genetic Asexual Reproduction Optimization Algorithm for Imputing Missing Values","authors":"M. Noei, M. S. Abadeh","doi":"10.1109/ICCKE48569.2019.8964808","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964808","url":null,"abstract":"In this paper, we suggest a new technique that significantly improve the computational time of the genetic algorithm for imputing missing values. Data contain noise and missing values, which made them unreliable for scientific purposes. Due to this, we are required to preprocess these data before using them. Researchers either avoid or impute missing data. It is necessary to choose an appropriate imputation method, and it is based on several factors such as datatypes and numbers of missing data. For a higher missing value rate, missing value imputation (MVI) can be suitable way for imputing missing data in incomplete dataset. One of the MVI methods is the genetic algorithm; although genetic algorithm may produce good results, the computational time is very high. The proposed algorithm is a combination of the genetic and Asexual Reproduction Optimization (ARO) algorithm. We present an experimental evaluation of Pima and mammographic mass dataset that collected from UCI repository. In the small percentage of missing values, those instances can be imputed by the ARO algorithm, but in the case of large amounts, our approach illustrates much better results. This proposed technique works even better when the rate of missing values is higher. The accuracy and computational time of our proposed algorithm are compared with another techniques like Mean, K-Nearest Neighbor, and SVM. On average our approach 8% improved the accuracy and 4% improved the ROC, and it requires less computational time than a basic genetic algorithm.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"9 1","pages":"214-218"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75055910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8964838
Sedighe Firuzinia, S. Mirroshandel, F. Ghasemian, Seyed Mahmoodreza Afzali
The morphological evaluation of metaphase II (MII) oocytes before Intra-Cytoplasmic Sperm Injection (ICSI) can help predict their developmental potential and the ICSI outcomes, and supports transferring the best embryo. The main morphometric features of MII oocytes are the thickness of the zona pellucida, the width of the perivitelline space, and the areas of the ooplasm and the oocyte. Manual characterization of MII oocytes is prone to high inter-observer and intra-observer variability. In this study, we propose a fully automatic algorithm to identify malformations in images of human oocytes. 1500 images of MII oocytes were taken using an inverted microscope before the ICSI process to build a dataset, namely the Human MII Oocyte Morphology Analysis Dataset (HMOMA-DS). The three main components of these prepared oocytes are analyzed. In the first step, we eliminate noise and enhance the quality of the input image. The regions of interest are then detected and segmented. Finally, the quality of the oocyte is assessed by measuring the size and area of its main components. We applied our method to the prepared dataset and achieved an accuracy of 98.51% for the thickness of the zona pellucida and the area of the oocyte. The accuracy values for measuring the area of the ooplasm and the width of the perivitelline space were 99.25% and 91.08%, respectively. The proposed fully automatic method performs effectively before ICSI due to its high accuracy and low computation time, and can help embryologists select the best-qualified embryo based on the analyzed parameters of injected oocytes in real time.
{"title":"An Automatic Method for Morphological Abnormality Detection in Metaphase II Human Oocyte Images","authors":"Sedighe Firuzinia, S. Mirroshandel, F. Ghasemian, Seyed Mahmoodreza Afzali","doi":"10.1109/ICCKE48569.2019.8964838","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964838","url":null,"abstract":"The morphological evaluation of metaphase II (MII) oocytes before Intra-Cytoplasmic Sperm Injection (ICSI) can help to know and predict their developmental potential, the ICSI outcomes, and transfer the best embryo. The main morphometric features of MII oocytes are the thickness of zona pellucida, the width of perivitelline space, and the area of ooplasm and oocyte. Manual characterization of the MII oocytes can be prone to high inter-observer and intra-observer variability. In this study, we propose a fully automatic algorithm to identify malformations in images of human oocytes. 1500 images of MII oocytes were taken using inverted microscope before the ICSI process to build a dataset, namely the Human MII Oocyte Morphology Analysis Dataset (HMOMA-DS). The three main components of these prepared oocytes are analyzed. As the first step, we eliminated the noise and enhanced the quality of our input image. Further the regions were detected and segmented. Finally, the quality of the oocyte was assessed in terms of measuring the size and area of its main components. We have applied our method to the prepared dataset. It has been able to achieve an accuracy of 98.51% for the thickness of zona pellucida and area of oocyte. The accuracy values for measuring the area of ooplasm and the width of perivitelline space were 99.25% and 91.08%, respectively. The proposed fully automatic method performs effectively before ICSI due to its high accuracy and low computation time. It can help embryologists to select the best-qualified embryo based on the available analyzed parameters from injected oocytes in real-time.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"2 3 1","pages":"91-97"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79893418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8965221
Raheleh Rahmati, A. Rasoolzadegan, Diyana Tehrany Dehkordy
Nowadays, the growth of software systems has raised the importance of the design phase. Developers have introduced numerous software design patterns. This study presents a new method to select among the Gang of Four (GoF) design patterns. The proposed method is implemented based on the vector space model (VSM). In this method, the Term Frequency-Inverse Document Frequency (TF-IDF) weighting algorithm is improved to determine the similarity between two texts more accurately. We also use a set of hyponyms and synonyms of the words in the weighting. We evaluated the proposed method with 23 design patterns, 29 object-oriented design problems, and nine real-world problems, and observed promising results compared to other methods: improvements of 8.5%, 1.2%, and 5.2% in precision, recall, and accuracy, respectively.
{"title":"An Automated Method for Selecting GoF Design Patterns","authors":"Raheleh Rahmati, A. Rasoolzadegan, Diyana Tehrany Dehkordy","doi":"10.1109/ICCKE48569.2019.8965221","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965221","url":null,"abstract":"Nowadays, an increase in the growth of software systems has risen the importance of the design phase. So far, developers have introduced numerous software design patterns. This study presents a new method to select the Gang of Four (GoF) design patterns. The proposed method is implemented based on the vector space model (VSM). In this method, the Term Frequency-Inverse Document Frequency (TF-IDF) weighting algorithm has been improved to determine the similarity between two texts, more accurately. Also, we used a set of hyponyms and synonyms of the words in weighting. We evaluated the proposed method with 23 design patterns, 29 object-oriented related design problems, and nine real-world problems. Finally, we observed promising results compared to other methods. We found 8.5%, 1.2%, and 5.2% improvement in terms of precision, recall, and accuracy of the proposed method as compared to other methods.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"44 1","pages":"345-350"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76928765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8965131
Jamshid Mozafari, M. Nematbakhsh, A. Fatemi
In recent years, question answering and information retrieval systems have been widely used by web users. The purpose of these systems is to find answers to users' questions. They consist of several components, the most essential of which is answer selection, which finds the most relevant answer. Earlier works used lexical features to measure the similarity of sentences, but recent research has shifted to deep neural networks. Among deep models, recurrent neural networks were used early on due to the sequential structure of text, whereas state-of-the-art works use convolutional neural networks. In this research, we present a new method based on deep neural networks that attempts to find the correct answer to a given question from a pool of candidate responses. Our proposed method uses wide convolution instead of narrow convolution, concatenates a sparse feature vector to the learned feature vector, and uses dropout in order to rank candidate answers to the user's question semantically. The results show a 1.01% improvement in MAP and a 0.2% improvement in MRR over the best previous model. The experiments show that using context-sensitive interactions between input sentences is useful for finding the best answer.
{"title":"Improved Answer Selection For Factoid Questions","authors":"Jamshid Mozafari, M. Nematbakhsh, A. Fatemi","doi":"10.1109/ICCKE48569.2019.8965131","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965131","url":null,"abstract":"In recent years, question and answer systems and information retrieval have been widely used by web users. The purpose of these systems is to find answers to users' questions. These systems consist of several components that the most essential of which is the Answer Selection, which finds the most relevant answer. In related works, the proposed models used lexical features to measure the similarity of sentences, but in recent works, the line of research has changed. They used deep neural networks. In the deep neural networks, early, recurrent neural networks were used due to the sequencing structure of the text, but in state of the art works, convolutional neural networks are used. We represent a new method based on deep neural network algorithms in this research. This method attempts to find the correct answer to a given question from the pool of responses. Our proposed method uses wide convolution instead of narrow convolution, concatenates sparse features vector into feature vector and uses dropout in order to rank candidate answers of the user’s question semantically. The results show a 1.01% improvement at the MAP and a 0.2% improvement at the MRR metrics than the best previous model. The experiments show using context-sensitive interactions between input sentences is useful for finding the best answer.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"39 1","pages":"143-148"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91209956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8964675
Behnam Farzaneh, A. Ahmed, Emad Alizadeh
The Internet of Things (IoT) is a collection of smart objects that interconnect and exchange the data they gather. The IoT includes sensor networks in which the nodes are limited in terms of power consumption, energy, and memory. Therefore, a protocol is needed to discover a proper path between nodes in the least amount of time. The Routing Protocol for Low-Power and Lossy Networks (RPL) is specially designed for IoT and is used for routing in Low-Power and Lossy Networks (LLNs). Quality of Service (QoS) in this routing protocol faces some challenges: in QoS-based networks, the routing protocol must be able to use several criteria during the routing process. This paper proposes enabling multi-criteria routing in RPL, using the well-known VIKOR Multi-Criteria Decision Making (MCDM) method for this goal. On each link of a route, the node selects the best parent according to the solution of the VIKOR method. Simulation results show that QoS improves in terms of average energy consumption, End-to-End Delay (E2ED), Packet Delivery Ratio (PDR), and throughput.
{"title":"MC-RPL: A New Routing Approach based on Multi-Criteria RPL for the Internet of Things","authors":"Behnam Farzaneh, A. Ahmed, Emad Alizadeh","doi":"10.1109/ICCKE48569.2019.8964675","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964675","url":null,"abstract":"The Internet of Things (IoT) is a collection of smart objects that interconnect and exchange data gathered. The IoT includes sensor networks in which the nodes are limited in terms of power consumption, energy usage, and memory. Therefore, a protocol is needed to discover the proper path between the nodes in the least amount of time. The Routing Protocol for Low Power and Lossy Networks (RPL) is specially-designed for IoT and used for routing nodes in Low-Power and Lossy Networks (LLNs). Quality of Service (QoS) in this routing protocol faces some challenges. In QoS based networks, the routing protocol must be able to utilize some criteria during the routing process. Enabling multi-criteria based routing in RPL is proposed in this paper. The well-known VIKOR Multi-Criteria Decision Making (MCDM) used for this goal. Each link of route selects the best parent according to the solution of the VIKOR method. Simulation results show that QoS increased in terms of average Energy Consumption, End-to-End Delay (E2ED), Packet Delivery Ratio (PDR) and Throughput.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"1 1","pages":"420-425"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90016727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8964967
Kambiz Vahedi, M. Abbaspour, Khadijeh Afhamisisi, Mohammad Rashidnejad
Recent metamorphic malware detection methods based on statistical analysis of malware code and on measuring similarity between code samples are far superior to signature-based detection methods; yet they remain weak against code obfuscation techniques such as the insertion of garbage code resembling benign files and the replacement of instructions with equivalent ones. This paper proposes a method for improved detection of metamorphic malware based on analyzing the activity and behavior of executable files. The process involves two stages: first, the behavior of the file is analyzed during runtime and its behavioral pattern is obtained; second, the behavioral patterns of known malware files are compared with the sample file to determine the level of similarity. The behavior analysis stage is carried out in a monitored environment, where the malicious behavioral features of the file are extracted. The second stage determines the level of similarity between the malware registered in the database in the first stage and the sample files. The experimental results show that the proposed method, by determining the similarity of behavioral patterns, significantly improves detection of metamorphic malware with no false positives.
{"title":"Behavioral Entropy Towards Detection of Metamorphic Malwares","authors":"Kambiz Vahedi, M. Abbaspour, Khadijeh Afhamisisi, Mohammad Rashidnejad","doi":"10.1109/ICCKE48569.2019.8964967","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8964967","url":null,"abstract":"Recent metamorphic malware detection methods based on statistical analysis of malware code and measuring similarity between codes are by far more superior compared with signature-based detection methods; yet, lacking against code obfuscation methods including insertion of garbage codes similar to benign files and replacing instructions with equivalent instructions. This paper proposes a method on improved detection of metamorphic malwares based on activity and behavior analysis of executable files. The process involves two stages: initially, behavior of the file is analyzed during runtime and the behavioral pattern is obtained; then, in the second stage, behavioral patterns of the malware files are compared with the sample file in order to determine the level of similarity. The stage on analyzing behavior of the file is accomplished in a monitored environment and then malicious behavioral features of the file are extracted. The second stage involves determining level of similarity between malwares registered into the database in the first stage and the sample files. The obtained experimental results show that the proposed method, by determining the similarity level of behavioral patterns, significantly improves detection of metamorphic malwares and along with no false positives.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"22 1","pages":"78-84"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90834439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. DOI: 10.1109/ICCKE48569.2019.8965206
Mahdieh Dehghani, A. Kamandi, M. Shabankhah, A. Moeini
Association rule mining, one of the most important branches of data mining, focuses on detecting frequent itemsets. Apriori is the first algorithm proposed for association rule mining; it produces exact results and can detect all frequent itemsets in a transaction database. However, Apriori has a worst-case time complexity on the order of 2^n, where n is the number of items in the database, and at each step the database is scanned to detect frequent itemsets. As a result, its response time is very large for large databases. There are two ways to reduce the response time of this algorithm: first, prune the candidate itemsets to be checked; second, reduce the dimension of the database. We use the second approach and reduce the dimension of the database, exploiting the fact that if an itemset is frequent, all of its subsets are frequent with at least the same frequency in the database. In the proposed algorithm, the database is scanned once, and frequent itemsets are then detected from the reduced database. Our algorithm improves the Apriori response time. To evaluate the algorithm, precision and recall measures were used; according to the experimental results, in most cases the algorithm achieves precision and recall above ninety percent.
{"title":"Toward a Distinguishing Approach for Improving the Apriori Algorithm","authors":"Mahdieh Dehghani, A. Kamandi, M. Shabankhah, A. Moeini","doi":"10.1109/ICCKE48569.2019.8965206","DOIUrl":"https://doi.org/10.1109/ICCKE48569.2019.8965206","url":null,"abstract":"Association rule mining, one of the most important branches of data mining, which focused on detecting frequent patterns of itemsets. Apriori is the first algorithm proposed for association rule mining. This algorithm has the best response and can detect all frequent itemsets from transaction databases. Apriori is of time complexity order two to the power n at worst case, n is the number of items in the database. At each step, the database is scanned to detect frequent itemsets. As a result, this algorithm has a very large response time for large databases. There are two ways to reduce the response time of this algorithm. First, prune the itemsets which candidate for checking. Second, reduce the dimension of the database. We used the second solution and reduce the dimension of the database considering that if a set is frequent, all of its subsets are frequent with more frequencies in the database. In the proposed algorithm, database scanned one time, and then frequent itemsets are detected by the reduced database. Our algorithm improved an apriori response time. To evaluate the algorithm, precision and recall measures have been used. According to the experimental in most cases, the algorithm can provide precision and recall above ninety percent.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"48 1","pages":"309-314"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90394534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}