Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298174
N. Madathil, S. Harous
Distributed statistical learning algorithms perform many machine learning tasks in a distributed environment. In some scenarios, data sharing among many parties is desirable and can increase the efficiency and statistical accuracy of the underlying algorithms. Due to the increasing size and complexity of today's big data, it is very important to solve problems with a very large number of features, records, and training samples. As a result, both the distributed transfer of these datasets and the underlying distributed solution methods must be handled efficiently and effectively. This paper compares the efficiency and accuracy of a distributed statistical method against a centralized method, using simple regression and classification algorithms.
{"title":"Central versus Distributed Statistical Computing Algorithms-A Comparison","authors":"N. Madathil, S. Harous","doi":"10.1109/UEMCON51285.2020.9298174","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298174","url":null,"abstract":"Distributed statistical learning algorithms are performing many machine learning tasks in a distributed environment. Some scenarios where data sharing is desired among many parties and it may need to increase the efficiency and statistical accuracy of the underlying algorithms. Due to the increase in the size and complexity of today’s big data, it is very important to solve problems with a very large number of features, records, and training samples. As a result, it is necessary to deal with the distributed transfer of these datasets as well as their underlying distributed solution methods efficiently and effectively. This paper compares the efficiency and accuracy of a distributed statistical method with a central method with simple regression and classification algorithms.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130559854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298102
Nafiz Sadman, M. Ahsan, M. Mahmud
Code review is one of the crucial steps in the software development process. Even when many experts are available, assigning the appropriate one is often challenging, time-consuming, and inefficient for industrial developers and researchers who demand instant solutions. An automated reviewer-assignment system can serve as an efficient alternative for meeting those needs. This paper aims to identify appropriate reviewers for a selected task through data analysis using Natural Language Processing (NLP) techniques. Appropriate Developer for Code Review (ADCR) is proposed, taking into account a set of data that comprises reviewers' information: responsiveness, experience, and acquaintanceship. Benefits of the proposed method include unbiased review accountability and early feedback opportunities for developers. Additionally, a tool is developed to process the automated review and speed up development cycles.
{"title":"ADCR: An Adaptive TOOL to select ”Appropriate Developer for Code Review” based on Code Context","authors":"Nafiz Sadman, M. Ahsan, M. Mahmud","doi":"10.1109/UEMCON51285.2020.9298102","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298102","url":null,"abstract":"Code review is one of the crucial steps in the software development process. Despite having many experts, assigning the appropriate one is often challenging, time-consuming, and inefficient for industrial developers and researchers who demand instant solutions. An automated code review system can serve as a proficient and alternative opportunity for those necessities. This paper aims to identify appropriate reviewers for a selected task based on data analysis using Natural Language Processing (NLP) techniques. Appropriate Developer for Code Review (ADCR) is proposed taking into account a set of data that comprises reviewers’ information—responsiveness, experience, and acquaintanceship—benefits of the proposed methods including unbiased review accountability and the early feed-back opportunity for the developers. Additionally, a tool is developed to process the automated review and speed up the development cycles.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123107528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298149
Logan Eisenbeiser
Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. As computer-generated music improves in quality, it has the potential to revolutionize the multi-billion-dollar music industry by providing additional tools to musicians as well as creating new music for consumers. Beyond simply generating music lies the challenge of controlling or conditioning that generation. Conditional generation can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques for conditional image generation, but its effectiveness in music-domain generation is largely unexplored, especially for generative adversarial networks (GANs). In this paper, latent walking is implemented with the MuseGAN generator to successfully control two semantic values: note count and polyphonicity (when more than one note is played at a time). This shows that latent walking is a viable technique for GANs in the music domain and can be used to improve the quality, among other features, of the generated music.
{"title":"Latent Walking Techniques for Conditioning GAN-Generated Music","authors":"Logan Eisenbeiser","doi":"10.1109/UEMCON51285.2020.9298149","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298149","url":null,"abstract":"Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. As computer-generated music improves in quality, it has potential to revolutionize the multi-billion dollar music industry by providing additional tools to musicians as well as creating new music for consumers. Beyond simply generating music lies the challenge of controlling or conditioning that generation. Conditional generation can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques for conditional image generation, but its effectiveness on music-domain generation is largely unexplored, especially for generative adversarial networks (GANs). In this paper, latent walking is implemented with the MuseGAN generator to successfully control two semantic values: note count and polyphonicity (when more than one note is played at a time). This shows that latent walking is a viable technique for GANs in the music domain and can be used to improve the quality, among other features, of the generated music.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127050238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298028
Ruchi Bagwe, K. George
The table-based numerical question-answering task requires a mechanism to understand the relation between table content and numbers (present in both the table and the question). It also needs an efficient method to handle complex reasoning over the table context. Most existing approaches in natural language processing address context-based questions on tables but fail to address the numerical reasoning part. They are also built on large search databases, which makes them challenging to use across multiple domains. These approaches use pre-trained models like BERT to encode the context of a complete table; hence they fail when a large table is provided as input, as full-table encoding is a very resource- and time-consuming task. In this paper, a framework is proposed to answer questions on tables with numerical reasoning. The framework uses a context-snapshot mechanism to filter out irrelevant table rows before tokenizing the table content. The filtered context and the tokenized question are converted into vector representations using a pre-trained BERT model. The proposed model finds the correlation between the tokenized context snapshot and the numbers in the question using graph neural networks. Further, it uses a feed-forward neural network to perform the numerical operation that computes the answer. The model is trained and evaluated on the WikiTableQuestions dataset and shows promising results.
{"title":"Automatic Numerical Question Answering on Table using BERT-GNN","authors":"Ruchi Bagwe, K. George","doi":"10.1109/UEMCON51285.2020.9298028","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298028","url":null,"abstract":"The table base numerical question-answering task requires a mechanism to understand the relation between table content and numbers (present in table and question). It also needs an efficient method to address complex reasoning on table context. Most of the existing approaches in the natural language processing technology address the context-based questions on the table but fail to address the numerical reasoning part. They are also built on a large search database, which makes it challenging to use them in multiple domains. These approaches use pre-trained models like BERT to perform context encoding of a complete table. Hence these models fail when a large table is provided as input, as full table encoding is a very resource and time-consuming task. In this paper, a framework is proposed to answer questions on the table with numerical reasoning. This framework uses a context-snapshot mechanism to filter irrelevant table rows before tokenizing the table content. The filtered context and tokenized question are converted into vector representation using a pre-trained BERT model. This proposed model finds the correlation between the tokenized context-snapshot and numbers in question using graph neural networks. Further, it uses a feed-forward neural network to perform the numerical operation to compute the answer. The model is trained and evaluated on WikiTableQuestions datasets, shows a promising result.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132803272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298181
Qi Wang, Xianping Wang
As a user-friendly human-computer interaction approach, EMG is regarded as one of the most promising modalities for hand gesture recognition. Although EMG-based hand gesture recognition has advanced in recent years, more sophisticated algorithms are still needed to effectively detect patterns in the noisy EMG signal. The convolutional neural network (CNN) is a popular deep learning algorithm whose unique architecture has achieved great success in image processing. In this study, we propose a new deep learning framework for hand gesture recognition from multi-session EMG signals. In the data representation stage, we transform the time-domain EMG signal to the time-frequency domain with the short-time Fourier transform (STFT) to capture more time-varying frequency characteristics. Our experiment shows that the proposed framework can effectively detect hand gestures from multi-session EMG data. This work will greatly advance hand gesture recognition.
{"title":"EMG-based Hand Gesture Recognition by Deep Time-frequency Learning for Assisted Living & Rehabilitation","authors":"Qi Wang, Xianping Wang","doi":"10.1109/UEMCON51285.2020.9298181","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298181","url":null,"abstract":"As a user-friendly human-computer interaction approach, EMG is regarded as one of the most promising modalities for hand gesture recognition. Though EMG-based hand gesture recognition has been advanced in recent years, to effective detect the patterns from the noisy EMG signal, more advanced algorithms are still highly necessary. Convolutional neural network (CNN) is a popular deep learning algorithm and its unique architecture has gained a great success in the image processing area. In this study, we propose a new deep learning framework for hand gesture recognition from the multi-session EMG signal. In the data representation stage, we also transform the time domain EMG signal to the time-frequency domain by short-term Fourier transform (STFT) to get more time-varying frequency characteristics. Our experiment shows that the proposed framework can effectively detect hand gestures from the multi-session EMG data. This work will greatly advance the hand gesture recognition.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"74 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114013174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study is an attempt to understand and address the mental health issues of working professionals through facial expression recognition. As a society, we are all currently discussing how a person who is suffering from an emotional issue can adopt ways to come out of a specific circumstance, and how we as a society can support such people in these situations. Our endeavor is to work on a way to identify people who are going through a difficult phase in their lives. It is not always evident that a person going through a tough phase will open up about their feelings to the people around them. We therefore make use of AI/ML to identify a person's emotion from facial expressions captured over a span of time and to recommend activities and thoughts that can help them get over their emotions when they are sad or fearful, addressing the problem to some extent.
{"title":"Facial Expression Recognition and Recommendations Using Deep Neural Network with Transfer Learning","authors":"Narayana Darapaneni, Rahul Choubey, Pratik Salvi, Ankur Pathak, Sajal Suryavanshi, A. Paduri","doi":"10.1109/UEMCON51285.2020.9298082","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298082","url":null,"abstract":"This study is an attempt to understand and address the mental health issue, of working professionals through facial expression recognition. As a society, we are all currently talking about ways as to how a person who is suffering from any emotional issue can adopt certain ways to come out of a specific circumstance and how we as a society can support such people in these situations.Our endeavor is to work on a way where the identification of such persons who are going through a difficult phase in their life can be performed. It is not always evident that a person going through a tough phase may open up about their feelings to people around them and hence making use of AI/ML to identify a person’s emotion through their facial expressions captured over a span of time thereby recommending them some activities, thoughts which can help them in getting over their emotions when they are sad, fearful or else will address the problem to some extent.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131863336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298070
Tian Lan, Yuxin Qian, Wenxin Tai, Boce Chu, Qiao Liu
The deep attractor network (DANet) is a recent deep learning-based method for monaural speech separation. The idea is to map the time-frequency bins of the spectrogram into an embedding space and to form attractors for each source in order to estimate masks. The original deep attractor network uses true speaker assignments to form attractors during training, but the K-means algorithm or a fixed-attractor method is used during the test phase to estimate attractors. The fixed-attractor method does not perform well when training and test conditions differ, and using the K-means algorithm during testing raises a center-mismatch problem, which leads to performance degradation. In this paper, we propose to use convolutional networks to estimate attractors in both the training and test phases. By using the same method to generate attractors in both phases, the center-mismatch problem is solved. Results reveal that the proposed method achieves better performance than DANet with the K-means method and obtains performance comparable to DANet using the ideal binary mask during testing, with limited training data.
{"title":"Deep Attractor with Convolutional Network for Monaural Speech Separation","authors":"Tian Lan, Yuxin Qian, Wenxin Tai, Boce Chu, Qiao Liu","doi":"10.1109/UEMCON51285.2020.9298070","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298070","url":null,"abstract":"Deep attractor network (DANet) is a recent deep learning-based method for monaural speech separation. The idea is to map the time-frequency bins from the spectrogram to the embedding space and form attractors for each source to estimate masks. The original deep attractor network uses true assignments of speaker to form attractors during training, but K-means algorithm or fixed attractor method is used during the test phase to estimate attractors. The fixed attractor method does not perform well when training and test condition is different. Using K-means algorithm during test raises a center mismatch problem, which leads to performance degradation. In this letter, we propose to use convolutional networks for estimating attractors in the training and test phases. By using the same method to generate attractors, the center mismatch problem is solved. Results revealed that the proposed method achieves better performance than DANet using K-means method and gets comparable performance with DANet using ideal binary mask during test with limited training data.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134128086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new encryption architecture that adds double-layer encryption to the existing secret sharing methodology. The image encryption scheme handles both grayscale and color images, and experimental results are provided. The paper deals with the transmission of multimedia, such as images, over insecure and secure networks. Secret sharing helps to mask the image from an attacker by breaking it down into shares whose content bears no relation to the original image, and it guarantees that the original image can be reconstructed only when the client holds all the shares.
{"title":"Enhanced Security Architecture for Visual Cryptography Based on Image Secret Sharing","authors":"Manas Abhilash Gundapuneni, Anzum Bano, Navjot Singh","doi":"10.1109/UEMCON51285.2020.9298166","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298166","url":null,"abstract":"This paper provides an approach to a new encryption architecture using double layer encryption standards for the existing secret sharing methodology. The image encryption standard in this scheme deals with both gray scale and color images and provides the experimental results. The paper deals with the transmission of multimedia such as images over insecure and secure networks, secret sharing helps to mask the image from the attacker by breaking it down to shares which are not at all related in the sense of content to the original image and provide the security of only reconstructing the original image when the client has all the shares.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"211 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133878045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298178
T. A. Wanigaaratchi, V. T. N. Vidanagama
There are many intelligent systems and tools that use highly efficient processing models to identify different anomalies with high accuracy. Anomaly detection is of high importance and is often an absolute requirement in high-risk environments and situations. The amount of processing involved in quick decision-making systems carries high deployment costs, which restricts anomaly detection to the select few who can build such resource-intensive systems. The modern world uses drones and other video feeds to find and track anomalous events around a specific area, but most such detection requires constant manual attention as well as processing power to keep up with real-time detection and recognition. The proposed research solution aims to automate this process and includes a two-step anomaly detection system that delivers quicker anomaly detection on an average processing unit, with an advanced recognition model reaching up to 90% accuracy. A deep learning model (VGG 16), together with an alert system and video comparison techniques, enables unsupervised anomaly detection over a landscape. The system generates alerts and recognizes anomalies in the alerted video frames. The proposed solution can be used with any video source and does not require a high-capacity system to obtain optimal output. Moreover, the solution brings a simple yet sophisticated technique for modern anomaly detection and quick alerting.
{"title":"Anomaly Detection and Identification Using Visual Techniques in Streaming Video","authors":"T. A. Wanigaaratchi, V. T. N. Vidanagama","doi":"10.1109/UEMCON51285.2020.9298178","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298178","url":null,"abstract":"there are many intelligent systems and tools which uses highly efficient processing models to identify different anomalies with high accuracy. The anomaly detection is of high importance and mostly will come as an absolute requirement at high risk environments and situations. The amount of processing involved in quick decision taking systems bare high deployment costs which restricts the anomaly detection only to a selected few who are capable of building such resource centered systems. Modern world uses drones and other video feeds in order to find and keep track of any anomalous events around a specific area. But most such detection requires absolute manual attention as well as processing power to keep up with real time detection and recognition. The proposed research solution aims to automate this process and includes a two-step anomaly detection system which gives a quicker anomaly detection in an average processing unit time with an advanced recognition model with up to 90% accuracy. The deep learning model (VGG 16) together with alert system and comparison techniques on videos leads into unsupervised anomaly detection of a landscape. The system generates alerts and recognizes anomalies on the alerted video frames. The proposed solution can also be used by any source and does not require high capacity of capability system to get the optimal output. Moreover, the solution brings a simple yet sophisticated technique to address modern anomaly detection and quick alerting system.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133119594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-28 | DOI: 10.1109/UEMCON51285.2020.9298095
Shreehar Joshi, Eman Abdelfattah
The Internet of Things, with its enormous growth in recent decades, has not just brought convenience to different aspects of our lives. It has also increased the risks of various forms of cybercriminal attack, ranging from personal information theft to the disruption of a service provider's entire network. As demand for such devices increases rapidly on a global scale, it has become increasingly difficult for corporations to focus on security efficiently. As such, the demand for methodologies that can aptly respond to and prevent intrusions within a network has soared. Anomaly traffic detection techniques have been applied in various ways in the past, all with the same aim of preventing network disruption. This research aims to find an efficient classifier that detects anomalous traffic in the N_BaIoT dataset with the highest overall precision and recall by experimenting with four machine learning techniques. Four binary classifiers, Decision Trees, Extra Trees, Random Forests, and Support Vector Machines, are tested and validated to produce the results. The outcome demonstrates that all the classifiers perform exceptionally well when used to train and test on anomalies within a single device. Moreover, the Random Forests classifier outperforms all others when training is done on one device and the anomaly is tested on completely unrelated devices.
{"title":"Efficiency of Different Machine Learning Algorithms on the Multivariate Classification of IoT Botnet Attacks","authors":"Shreehar Joshi, Eman Abdelfattah","doi":"10.1109/UEMCON51285.2020.9298095","DOIUrl":"https://doi.org/10.1109/UEMCON51285.2020.9298095","url":null,"abstract":"The Internet of Things, with its enormous growth in the recent decades, has not just brought convenience to the different aspects of our lives. It has also increased the risks of various forms of cybercriminal attacks, ranging from personal information theft to the disruption of the entire network of a service provider. As the demands of such devices increase rapidly on a global scale, it has become increasingly difficult for different corporations to focus on security efficiently. As such, the demand for methodologies that can aptly respond to prevent intrusion within a network has soared disturbingly. Various utilization of anomaly traffic detection techniques has been conducted in the past, all with the similar aim to prevent disruption in networks. This research aims to find an efficient classifier that detects anomaly traffic from N_BaIoT dataset with the highest overall precision and recall by experimenting with four machine learning techniques. Four binary classifiers: Decision Trees, Extra Trees Classifiers, Random Forests, and Support Vector Machines are tested and validated to produce the result. The outcome demonstrates that all the classifiers perform exceptionally well when used to train and test the anomaly within a single device. Moreover, Random Forests classifier outperforms all others when training is done on a particular device to test the anomaly on completely unrelated devices.","PeriodicalId":433609,"journal":{"name":"2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125869362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}