Pub Date: 2020-12-19 · DOI: 10.5121/csit.2020.101914
A. Maignan, Tony C. Scott
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a value 𝜎, a hyper-parameter which can be manually defined and tuned to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an outstanding task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size 𝜎, there is only one minimum. This bound is useful not only for the number of solutions to look for by numerical means; it also allows us to propose a new numerical approach "per block". This technique decreases the number of particles (or samples) by approximating some groups of particles by weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics, and other applications.
Title: Quantum Clustering Analysis: Minima of the Potential Energy Function
Journal: Computer science & information technology
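To make the quantum potential concrete, here is a minimal pure-Python sketch (not the authors' code) that evaluates the QC potential, up to an additive constant, for a toy two-dimensional dataset and locates its minimum by grid search; the dataset, grid, and 𝜎 are illustrative assumptions.

```python
import math

def wave(x, points, sigma):
    # Parzen-window wave function: a Gaussian centered on each data point.
    return sum(math.exp(-((x[0] - px) ** 2 + (x[1] - py) ** 2) / (2 * sigma ** 2))
               for px, py in points)

def potential(x, points, sigma):
    # Quantum potential up to an additive constant:
    # V(x) ~ (1 / (2 sigma^2 psi)) * sum_i d_i^2 * exp(-d_i^2 / (2 sigma^2))
    psi = wave(x, points, sigma)
    num = sum(((x[0] - px) ** 2 + (x[1] - py) ** 2)
              * math.exp(-((x[0] - px) ** 2 + (x[1] - py) ** 2) / (2 * sigma ** 2))
              for px, py in points)
    return num / (2 * sigma ** 2 * psi)

# Two well-separated blobs -> expect a potential minimum near each blob.
points = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 4.9)]
sigma = 1.0
grid = [(i * 0.1, j * 0.1) for i in range(-10, 61) for j in range(-10, 61)]
best = min(grid, key=lambda g: potential(g, points, sigma))
```

Grid search is used only for illustration; the paper's point is precisely that smarter numerical strategies (and the one-minimum-per-𝜎-square bound) are needed at scale.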
Pub Date: 2020-12-19 · DOI: 10.5121/csit.2020.101903
Nikola Banić, Karlo Koščević, M. Subašić, S. Lončarić
Computational color constancy is used in almost all digital cameras to reduce the influence of scene illumination on object colors. Many of the most accurate published illumination estimation methods use deep learning, which relies on large amounts of images with known ground-truth illuminations. Since the appropriate publicly available training datasets are relatively small, data augmentation is often used as well, e.g. by simulating the appearance of a given image under another illumination. Still, there are practically no reports on the desired properties of such simulated images or on the limits of their usability. In this paper, several experiments for determining some of these properties are proposed and conducted by comparing the behavior of the simplest illumination estimation methods on images of the same scenes obtained under real illuminations and on images obtained through data augmentation. The experimental results are presented and discussed.
Title: On Some Desired Properties of Data Augmentation by Illumination Simulation for Color Constancy
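As a sketch of the machinery involved, the following hypothetical pure-Python snippet shows a gray-world illumination estimate (one of the "simplest illumination estimation methods") and a von Kries-style diagonal relighting transform of the kind commonly used for this sort of augmentation; the pixel data and illuminants are made up.

```python
def gray_world(image):
    # Gray-world assumption: the mean of each channel estimates the illumination.
    n = len(image)
    return tuple(sum(px[c] for px in image) / n for c in range(3))

def relight(image, src_ill, dst_ill):
    # Diagonal (von Kries) model: scale each channel by dst/src to simulate
    # the same scene under a different illumination.
    gains = tuple(d / s for s, d in zip(src_ill, dst_ill))
    return [tuple(v * g for v, g in zip(px, gains)) for px in image]

# Toy scene of three RGB pixels under some unknown illumination.
scene = [(0.4, 0.5, 0.6), (0.2, 0.5, 0.8), (0.6, 0.5, 0.4)]
est = gray_world(scene)                            # estimated source illumination
augmented = relight(scene, est, (1.0, 1.0, 1.0))   # simulate a neutral illuminant
```

The paper's question is precisely how far such simulated images behave like real re-illuminations when estimation methods are run on them.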
Pub Date: 2020-12-19 · DOI: 10.5121/csit.2020.101913
M. Yamauchi, K. Nakano, Yoshiya Tanaka, K. Horio
In this article, we implemented a regression model and conducted experiments for predicting disease activity using data from 1929 rheumatoid arthritis patients, to assist in the selection of biologics for rheumatoid arthritis. During modelling, the missing variables in the data were imputed by three different methods: mean value, self-organizing map, and random value. Experimental results showed that the prediction error of the regression model was large regardless of the imputation method, making it difficult to predict the prognosis of rheumatoid arthritis patients.
Title: Predicting Disease Activity for Biologic Selection in Rheumatoid Arthritis
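The simplest of the three imputation strategies, mean-value imputation, followed by a regression fit, can be sketched as below; the feature matrix, activity scores, and one-feature OLS model are illustrative stand-ins, not the paper's actual data or model.

```python
import statistics

def mean_impute(rows):
    # Replace None entries in each column with that column's mean over observed values.
    cols = list(zip(*rows))
    means = [statistics.mean([v for v in col if v is not None]) for col in cols]
    return [[m if v is None else v for v, m in zip(row, means)] for row in rows]

def ols(xs, ys):
    # Ordinary least squares fit of y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Toy feature matrix with missing entries (None), e.g. two lab measurements per patient.
X = [[1.0, None], [2.0, 4.0], [None, 6.0], [4.0, 8.0]]
X_filled = mean_impute(X)
# Fit made-up disease-activity scores against the first imputed feature.
activity = [1.0, 2.0, 3.0, 4.0]
a, b = ols([row[0] for row in X_filled], activity)
```

The paper's finding is that, whichever imputation is used, the residual error of the fitted model stays too large for reliable prognosis.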
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101802
Gilbert Busolo, L. Nderu, Kennedy Ogada
Knowledge is a strategic resource for successful data-driven decision making in any organization. To harness this knowledge, successful adoption of a technological intervention is key. Institutions leverage technology to drive knowledge management (KM) initiatives for quality service delivery and prudent data management. These initiatives provide the overall strategy for managing data resources. They make knowledge organization tools and techniques available while enabling regular updates. The benefits of successful deployment of a technological intervention are competency enhancement through gained knowledge, raised quality of service, and promotion of healthy development of e-commerce. Successful and timely adoption of the technological interventions through which knowledge management initiatives are deployed remains a key challenge for many organizations. This paper proposes a holistic, multilevel technology acceptance management model. The proposed model takes into account the human, technological, and organizational variables that exist in a deployment environment. This model will be vital in driving early technology acceptance prediction and the timely deployment of mitigation measures so that technological interventions are deployed successfully.
Title: A Multilevel Technology Acceptance Management Model
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101804
Shiyuan Zhang, Evan Gunnell, Marisabel Chang, Yu Sun
As more students are required to have standardized test scores to enter higher education, developing vocabulary becomes essential for achieving ideal scores. Each individual has his or her own study style that maximizes efficiency, and there are various approaches to memorization. However, it is difficult to find the specific learning method that best fits a person. This paper designs a tool to customize personal study plans based on clients' different habits, including difficulty distribution, the difficulty order of learning words, and the types of vocabulary. We applied our application to educational software and conducted a quantitative evaluation of the approach via three types of machine learning models. By calculating cross-validation scores, we evaluated the accuracy of each model and identified the model that returns the most accurate predictions. The results reveal that linear regression has the highest cross-validation score, and it can provide the most efficient personal study plans.
Title: An Intellectual Approach to Design Personal Study Plan via Machine Learning
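The model-selection step described above, scoring a regression model by k-fold cross-validation, can be sketched in a few lines; the one-feature OLS model, fold scheme, and toy difficulty/memorization data are assumptions for illustration, not the paper's setup.

```python
def ols(xs, ys):
    # Ordinary least squares fit of y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def kfold_cv_mse(xs, ys, k=3):
    # Mean squared error over k interleaved folds:
    # fit on k-1 folds, score predictions on the held-out fold.
    n = len(xs)
    errs = []
    for f in range(k):
        train = [i for i in range(n) if i % k != f]
        held = [i for i in range(n) if i % k == f]
        a, b = ols([xs[i] for i in train], [ys[i] for i in train])
        errs.extend((ys[i] - (a + b * xs[i])) ** 2 for i in held)
    return sum(errs) / len(errs)

# Hypothetical data: word difficulty vs. days needed to memorize, perfectly linear here,
# so the cross-validated error should be essentially zero.
difficulty = [float(d) for d in range(9)]
days = [2 * d + 1 for d in difficulty]
score = kfold_cv_mse(difficulty, days)
```

Comparing such scores across candidate models is how the paper picks linear regression as the best of its three model types.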
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101805
Huihui He, Yongjun Wang
Because stateful network protocols are interactive, network protocol fuzzing suffers from high blindness and low test-case validity. Existing blackbox-based fuzzing has the disadvantages of high randomness and blindness. Manually describing a protocol specification requires expert knowledge, is tedious, and does not support protocols without public documentation, which limits the effectiveness of current network protocol fuzzers. In this paper, we present PNFUZZ, a fuzzer that adopts state inference based on a packet clustering algorithm together with a coverage-oriented mutation strategy. We train a clustering model on the target protocol's packets and use the model to identify the server's protocol state, thereby optimizing the process of test-case generation. The experimental results show that the proposed approach yields a measurable improvement in fuzzing effectiveness.
Title: PNFUZZ: A Stateful Network Protocol Fuzzing Approach Based on Packet Clustering
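The state-inference idea, clustering server response packets so that each cluster stands in for a protocol state, can be sketched as below; the packet features, naive k-means initialization, and toy packets are illustrative assumptions, not PNFUZZ's actual algorithm.

```python
def features(pkt: bytes):
    # Crude per-packet features: length and mean byte value.
    return (float(len(pkt)), sum(pkt) / len(pkt))

def kmeans(points, k, iters=20):
    # Minimal k-means; initialized naively from the first k points.
    cents = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - cents[c][0]) ** 2 + (p[1] - cents[c][1]) ** 2)
            groups[j].append(p)
        cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else cents[j]
                 for j, g in enumerate(groups)]
    return cents

# Toy "server responses": short error packets vs. long data packets should
# separate into two clusters, i.e. two inferred protocol states.
pkts = [b"ERR1", b"ERR2", b"DATA" * 20, b"DATA" * 21]
cents = kmeans([features(p) for p in pkts], k=2)
```

A fuzzer can then map each new response to its nearest centroid to decide which inferred state the server is in before choosing the next mutation.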
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101809
Marie-Anne Xu, Rahul Khanna
Recent progress in machine reading comprehension and question-answering has allowed machines to reach and even surpass human question-answering performance. However, the majority of these questions have only one answer, and more substantial testing on questions with multiple answers, or multi-span questions, has not yet been applied. Thus, we introduce a newly compiled dataset consisting of questions with multiple answers that originate from previously existing datasets. In addition, we run BERT-based models pre-trained for question-answering on our constructed dataset to evaluate their reading comprehension abilities. Among the three BERT-based models we ran, RoBERTa exhibits the highest consistent performance, regardless of size. We find that all our models perform similarly on this new multi-span dataset (21.492% F1) compared to the single-span source datasets (~33.36% F1). While the models tested on the source datasets were slightly fine-tuned, performance is similar enough to judge that task formulation does not drastically affect question-answering abilities. Our evaluations indicate that these models are indeed capable of adjusting to answer questions that require multiple answers. We hope that our findings will assist future development in question-answering and improve existing question-answering products and methods.
Title: Importance of the Single-Span Task Formulation to Extractive Question-answering
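The F1 figures quoted above are token-overlap scores; a common way to extend the standard single-span token F1 to multi-span answers is to pool the tokens of all spans, as in this sketch (the pooling choice and normalization are assumptions, not necessarily the paper's exact metric).

```python
from collections import Counter

def token_f1(pred_tokens, gold_tokens):
    # Standard bag-of-tokens F1: harmonic mean of token precision and recall.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    p = overlap / len(pred_tokens)
    r = overlap / len(gold_tokens)
    return 2 * p * r / (p + r)

def multi_span_f1(pred_spans, gold_spans):
    # Pool every span's tokens into one bag before scoring.
    pred = [t for s in pred_spans for t in s.lower().split()]
    gold = [t for s in gold_spans for t in s.lower().split()]
    return token_f1(pred, gold)
```

For example, predicting only one of two gold answer spans yields partial credit rather than zero, which is what makes the metric usable for multi-span evaluation.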
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101810
Oluseyi Olarewaju, A. Kokkinakis, Simon Demediuk, Justus Roberstson, Isabelle Nölle, Sagarika Patra, Daniel Slawson, A. Chitayat, Alistair Coates, B. Kirman, Anders Drachen, M. Ursu, Florian Block, Jonathan Hook
Unlike traditional physical sports, esports are played on wholly digital platforms. As a consequence, rich data (in-game, audio, and video) exist about the events that take place in matches. These data offer viable linguistic resources for generating comprehensible text descriptions of matches, which could be used as the basis of novel text-based spectator experiences. We present a study that investigates whether users perceive text generated by an NLG system as an accurate recap of highlight moments. We also explore how the generated text supported viewer understanding of highlight moments in two scenarios: i) text as an alternative way to spectate a match, instead of viewing the main broadcast; and ii) text as an additional information resource to be consumed while viewing the main broadcast. Our study provides insights into the implications of these presentation strategies for the use of text in recapping highlight moments to Dota 2 spectators.
Title: Automatic Generation of Text for Match Recaps using Esport Caster Commentaries
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101812
Ali M. Alagrami, Maged M. Eljazzar
Tajweed is a set of rules for reciting the Quran with correct pronunciation of the letters and all their qualities. Every letter in the Quran must be given its due characteristics, applied to that particular letter in its specific context while reading, which may differ in other contexts. These characteristics include melodic rules, such as where to stop and for how long, when to merge two letters in pronunciation, when to stretch some, or when to put more strength on some letters than others. Most papers focus mainly on the main recitation rules and pronunciation, but not on Ahkam Al-Tajweed, the rules that give a different rhythm and melody to the pronunciation with each different rule of Tajweed. These rules are also very important and essential in reading the Quran, as they can give different meanings to the words. In this paper, we discuss in detail a full system for automatic recognition of Quran recitation rules (Tajweed) using a support vector machine and a threshold scoring system.
Title: SMARTAJWEED Automatic Recognition of Arabic Quranic Recitation Rules
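The combination of an SVM with a threshold scoring system can be sketched as follows: the SVM's signed decision value is compared against a threshold band to decide whether a rule was applied correctly, incorrectly, or uncertainly. The weights, features, threshold, and margin below are all hypothetical, not the paper's trained model.

```python
def decision_value(x, w, b):
    # Linear SVM decision function f(x) = w·x + b.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def threshold_score(val, threshold=0.0, margin=0.5):
    # Threshold scoring: only confident decision values produce a verdict;
    # values inside the margin band are flagged as uncertain.
    if val >= threshold + margin:
        return "correct"
    if val <= threshold - margin:
        return "incorrect"
    return "uncertain"

# Hypothetical weights over two acoustic features for one Tajweed rule.
w, b = [1.2, -0.7], -0.1
verdict = threshold_score(decision_value([1.0, 0.2], w, b))
```

The uncertain band lets such a system ask for re-recitation instead of committing to a possibly wrong judgment.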
Pub Date: 2020-12-18 · DOI: 10.5121/csit.2020.101808
Hager Ali Yahia, Mohammed Zakaria Moustafa, Mohammed Rizk Mohammed, H. Khater
A support vector machine (SVM) learns the decision surface from two different classes of input points. In many applications, some of the input points are misclassified and are not fully assigned to either of these two classes. In this paper, a bi-objective quadratic programming model with fuzzy parameters is utilized, and different feature quality measures are optimized simultaneously. An α-cut is defined to transform the fuzzy model into a family of classical bi-objective quadratic programming problems. The weighting method is used to optimize each of these problems. An important contribution of the proposed fuzzy bi-objective quadratic programming model is that different efficient support vectors are obtained by changing the weighting values. The experimental results show the effectiveness of the α-cut with the weighting parameters in reducing the misclassification between the two classes of input points. An interactive procedure is added to identify the best compromise solution among the generated efficient solutions.
Title: A Fuzzy BI-Objective Model for SVM with an Interactive Procedure to Identify the Best Compromise Solution
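The weighting method mentioned above scalarizes the two objectives into one: minimize w·f1 + (1-w)·f2 and sweep w to trace the efficient set. A toy one-dimensional illustration with two quadratic objectives (not the paper's SVM model) makes the idea concrete, since the scalarized problem has a closed-form minimizer.

```python
def weighted_min(a, b, w):
    # Minimizer of w*(x - a)^2 + (1 - w)*(x - b)^2: setting the derivative
    # to zero gives x* = w*a + (1 - w)*b, a convex combination of the two
    # single-objective optima.
    return w * a + (1 - w) * b

# Sweeping the weight from 0 to 1 traces the efficient (Pareto) set between
# the two objectives' individual minima at x = 0 and x = 2.
efficient = [weighted_min(0.0, 2.0, w / 10) for w in range(11)]
```

Each weight yields one efficient solution; the paper's interactive procedure then lets the decision maker pick the best compromise among them.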