Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258755
Faisal Khurshid, Yan Zhu, Chubato Wondaferaw Yohannese, M. Iqbal
Online purchasing has become an integral part of our lives in this digital era, where e-commerce websites allow people to buy products and to share their experiences about products or services in the form of reviews. Customers and companies alike use these reviews for decision making. This facility helps people form their buying decisions, whereas malicious users exploit it to promote or demote products or services intentionally; this phenomenon is called review spam. Review spam detection is the classification of reviews as malign or benign. Our aim is therefore to evaluate the performance of supervised machine learning algorithms for review spam detection based on different feature sets extracted from a real-life dataset rather than an Amazon Mechanical Turk (AMT) tailored dataset. We study various measures, including recall, precision, and the Receiver Operating Characteristic (ROC), through experimentation. AdaBoost outperforms all other algorithms with 0.83 precision: it correctly identifies all spam reviews while misclassifying only a minuscule number of normal ones.
{"title":"Recital of supervised learning on review spam detection: An empirical analysis","authors":"Faisal Khurshid, Yan Zhu, Chubato Wondaferaw Yohannese, M. Iqbal","doi":"10.1109/ISKE.2017.8258755","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258755","url":null,"abstract":"Online purchasing became an integral part of our lives in this digital era where E-commerce websites allow people to buy as well as share their experiences about products or services in the form of reviews. Customers as well as companies use these reviews for decision making. This facility helps people to derive their buying decisions whereas malicious users use this as their tool to promote or demote products or services intentionally. This phenomenon is called review spam. Review spam detection is the classification of reviews into malign or benign. Therefore, our aim is to evaluate performance of supervised machine learning algorithms for review spam detection based on different feature sets extracted from real life dataset instead of Amazon Mechanical Turkers (AMT) tailored dataset. We study various factors including Recall, Precision, and Receiver Operating Characteristic (ROC) through experimentation. AdaBoost outperforms all others with 0.83 precision and has correctly identified all spams whereas misclassified minuscule number of normal reviews.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131838964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258818
Zhicai Liu, Bo Li, Zheng Pei, K. Qin
The study of concept lattices, property-oriented concept lattices, and object-oriented concept lattices provides complementary conceptual structures that can be used to search, analyze, and extract information from large data sets. From the viewpoint of multi-granular computing, this paper proposes a novel approach to formal concept analysis via multi-granulation attributes. Based on a formal context and an equivalence relation on its attribute set, we construct a multi-granulation formal context. The related attribute-oriented concept lattices are investigated, and some important properties are examined. We also provide methods to generate the attribute-oriented concept lattices associated with a multi-granulation formal context. This study can be used to analyze concepts and sub-concepts over attributes and to mine multi-level and cross-level association rules originating from practical applications, such as real-life market basket analysis.
{"title":"Formal concept analysis via multi-granulation attributes","authors":"Zhicai Liu, Bo Li, Zheng Pei, K. Qin","doi":"10.1109/ISKE.2017.8258818","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258818","url":null,"abstract":"The study of concept lattices, property oriented concept lattices and object oriented concept lattices provides complementary conceptual structures, which can be used to search, analyze and extract information from large data sets. In the view of multi-granular computing, this paper proposes a novel approach for formal concept analysis via multi-granulation attributes. Based on a formal context and an equivalence relation on its attribute set, we construct a multigranulation formal context. The related attribute oriented concept lattices are investigated. Some important properties are examined. We also provide methods to generate attribute oriented concept lattices associated with a multi-granulation formal context. This study can be used to analyze concepts and sub-concepts on attributes, to mine multi-level association rules and cross-level association rules originated from practice application, such as in our real-life market basket analysis.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134089138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258839
Dongwei Li, Linfeng Liu, Daoliang Chen, Jing Wen
With the increasing interest in marine applications in recent years, the technology of underwater wireless sensor networks (UWSNs) has received considerable attention. In a UWSN, the gathered data are sent to a terrestrial control center through multiple hops for further processing. UWSNs usually consist of three types of nodes: ordinary nodes, anchor nodes, and sink nodes; data messages are transferred from an ordinary node or an anchor node to one of the sink nodes by discrete hops. We therefore propose a Data Forwarding Algorithm based on an estimated Hungarian method (DFAH) to improve the delivery ratio and reduce transmission delay. The estimated Hungarian method is applied to solve the assignment problem in the data forwarding process, where anchor nodes receive forwarding requests from ordinary nodes and optimize the waiting queue. Both analysis and simulation results indicate that DFAH has advantages in delivery success rate and transmission delay.
{"title":"A data forwarding algorithm based on estimated Hungarian method for underwater sensor networks","authors":"Dongwei Li, Linfeng Liu, Daoliang Chen, Jing Wen","doi":"10.1109/ISKE.2017.8258839","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258839","url":null,"abstract":"With the increasing concern over marine applications in recent years, the technology of underwater wireless sensor networks (UWSNs) has received considerable attention. In UWSNs, the gathered data is sent to terrestrial control center through multi-hops for further processing. UWSNs usually consists of three types of nodes: ordinary nodes, anchor nodes, and sink nodes. The data messages are transferred from an ordinary node or an anchored node to one of the sink nodes by discrete hops. Thus, we propose a Data Forwarding Algorithm based on estimated Hungarian method (DFAH) to improve delivery ratio and reduce transmission delay. The estimated Hungarian method is applied to solve the assignment problem in data forwarding process, where the anchor nodes receive the forwarding requests from ordinary nodes and optimize the waiting queue. Both analysis and simulation results indicate that DFAH has advantages in delivery success rate and transmission delay.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114092822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258782
Feng Cao, Yang Xu, Jian Zhong, Guanfeng Wu
Nowadays, most well-known and powerful first-order logic automated theorem provers use a saturation procedure, called the given-clause algorithm, as their deductive framework. The given-clause algorithm is a divisional framework that splits the entire clause set into a selected clause set and a deductive clause set; it is efficient when combined with strong heuristic strategies and inference calculi. In this paper, we present a holistic deductive framework based on standard contradiction separation. The framework allows as many clauses as possible from the whole clause set to take part in a deduction, taking full advantage of the synergetic effect between clauses. It is a multi-ary, dynamic, sound, and complete deduction that can generate more new unit clauses toward the goal. We have implemented a preliminary version of the prover; under effective strategies it finds proofs in fewer inference steps for some Mizar and TPTP problems. Related definitions and useful methods are proposed for programming search paths, avoiding repetition, simplifying clauses, and the key strategies; they contribute to finding proofs more efficiently. Performance analysis and conclusions are outlined at the end of the paper.
{"title":"Holistic deductive framework theorem proving based on standard contradiction separation for first-order logic","authors":"Feng Cao, Yang Xu, Jian Zhong, Guanfeng Wu","doi":"10.1109/ISKE.2017.8258782","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258782","url":null,"abstract":"Nowadays, famous and powerful first-order logic automated theorem proving almost use saturation which called given-clause algorithm as the deductive framework. The given-clause algorithm is a divisional framework which divides the entire clauses into selected clause set and deductive clause set. It is efficient with large heuristic strategies and inference calculus. In this paper, we present a holistic deductive framework based on standard contradiction separation. The framework allows arbitrary clause to take part in deduction in the holistic clause set as many as possible which can take full advantage of synergetic effect between clauses. It is a multi-ary, dynamic, sound, complete deduction which can generate more new unit clauses as the goal. We implement the preliminary version of prover, it can find proofs in fewer inferential steps for some Mizar and TPTP problems under effective strategies. Related definitions and useful methods are proposed for programming search paths, avoiding repetition, simplifying clauses and the key strategies, they are contribute to finding proof more efficiently. 
Performance analysis and some conclusions are outlined at the end of this paper.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124956636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258760
Leilei Chang, Tianjun Liao, Jiang Jiang
Technologies push social and military developments towards new frontiers and boundaries. We propose the concept of the Technology System of Systems (TSoS) as a collection of multiple technologies driven by capability/system requirements. This study further investigates the generation, description, modeling, and assessment of a TSoS. Specifically, the generation of a TSoS is based on the hierarchy hypothesis, and its description is based on the concept of views, each describing a part of the TSoS. The readiness assessment evaluates the comprehensive readiness of a technology group, while the satisfaction assessment yields a comprehensive, quantitative result on how well a technology (group) can meet the TSoS requirement. A TSoS with 60 technologies is derived from the 2017 US fiscal year budget estimates and studied to verify the efficiency of the proposed generation, description, modeling, and assessment methodology.
{"title":"A case study of the generation, description, modeling and assessment of technology system of systems","authors":"Leilei Chang, Tianjun Liao, Jiang Jiang","doi":"10.1109/ISKE.2017.8258760","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258760","url":null,"abstract":"In pushing the social and military developments towards its new frontiers and boundaries are the technologies. As a collection of multiple technologies driven by capability/system requirements, the concept of the Technology System of Systems (TSoS) is proposed. This study further investigates the generation, description, modeling and assessment of TSoS. Specifically, the generation of TSoS is based on the hierarchy hypothesis of TSoS. The description of TSoS is based on concept of views with each to describe a part of TSoS. The readiness assessment of TSoS is to assess the comprehensive readiness of a technology group while the satisfaction assessment of TSoS is to obtain a comprehensive and quantitative result on how much a technology (group) can meet the TSoS requirement. A TSoS with 60 technologies is derived from the 2017 US physical year budget estimates and it is further studied to verify the efficiency of the proposed TSoS generation, description, modeling and assessment methodology.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121075591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258762
Xiongtao Zhang, Xingguang Pan, Shitong Wang
Although the Deep Belief Network (DBN) has been applied to a wide range of practical scenarios (e.g., image classification, signal recognition, and remaining-useful-life estimation) on account of its high classification accuracy, its functionality is hard to interpret, and a high level of interpretability is also desirable for users. In this paper, we propose a novel fuzzy DBN system called TSK_DBN, which combines a DBN with a TSK fuzzy system. First, the fuzzy clustering algorithm FCM is used to partition the input space, and the membership functions of the fuzzy rules are defined. Then, implicit features are created by the DBN. Finally, the consequent parameters of the fuzzy rules are determined by the Least Learning Machine (LLM). TSK_DBN has an adaptive mechanism that automatically adjusts the depth of the DBN until the best accuracy is achieved. Several benchmark datasets are used to empirically evaluate the efficiency of the proposed TSK_DBN on pattern classification tasks. The results show that the accuracy of TSK_DBN is at least comparable (if not superior) to that of a DBN, with the distinctive ability to provide explicit knowledge in the form of a highly interpretable rule base.
{"title":"Fuzzy DBN with rule-based knowledge representation and high interpretability","authors":"Xiongtao Zhang, Xingguang Pan, Shitong Wang","doi":"10.1109/ISKE.2017.8258762","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258762","url":null,"abstract":"Although Deep Belief Network (DBN) has been applied to a wide range of practical scenarios, i.e. image classification, signal recognition, remaining useful life estimation, on account of its powerful high classification accuracy, but it has impossible interpretation of functionality (it is desirable to have a high level of interpretability for users also). In this paper, we propose a novel fuzzy DBN system called TSK_DBN which combines DBN and TSK fuzzy system. Firstly, the fuzzy clustering algorithm FCM is used to divide the input space, and the membership function of the fuzzy rule is defined. Then, the implicit feature is created by DBN. Finally, the consequent parameters of the fuzzy rule are determined by LLM(Least Learning Machine). The TSK_DBN fuzzy system has an adaptive mechanism, which can automatically adjust the depth until the optimal accuracy is achieved. The prominent character of the TSK_DBN system is that there is adaptive mechanism to regulate the depth of DBN to get a high accuracy. Several benchmark datasets have been used to empirically evaluate the efficiency of the proposed TSK_DBN in handling pattern classification tasks. 
The results show that the accuracy rates of TSK_DBN are at least comparable (if not superior) to DBN system with distinctive ability in providing explicit knowledge in the form of high interpretable rule base.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115389430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258720
Yangyang Hu, W. Cheng
WebSocket is a TCP-based protocol providing full-duplex communication channels over a single TCP connection, and more and more developers are choosing it for application development. This paper discusses traditional real-time Web communication schemes and analyzes the advantages of WebSocket. Based on an in-depth study of WebSocket, a real-time campus information push solution based on WebSocket and Node.js is proposed. The paper details the overall structure of the scheme and the design and implementation of each module; functional testing shows that the scheme is feasible.
{"title":"Research and implementation of campus information push system based on WebSocket","authors":"Yangyang Hu, W. Cheng","doi":"10.1109/ISKE.2017.8258720","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258720","url":null,"abstract":"WebSocket is a TCP-based protocol, providing full-duplex communication channels over a single TCP connection. In recent years, more and more developers choose to use it to develop applications. This paper discusses the traditional real-time Web communication schemes, and analyzes the advantages of WebSocket. Based on the deep study of WebSocket, a real-time campus information push solution based on WebSocket and node.js is proposed. This paper expatiates the overall structure of the scheme, designs and realizes the functions of each module, and the functional testing results show that the scheme is feasible.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116782493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258816
Binbin Xue, Lu Wang, K. Qin
This paper focuses on the triple I inference method based on interval-valued fuzzy soft sets. Computational formulas for both interval-valued fuzzy soft modus ponens (IVFSMP) and interval-valued fuzzy soft modus tollens (IVFSMT) with respect to left-continuous interval-valued t-norms and their residual interval-valued implications are presented. In addition, the reversibility properties of the triple I methods for IVFSMP and IVFSMT are analyzed.
{"title":"An interval-valued fuzzy soft set based triple I method","authors":"Binbin Xue, Lu Wang, K. Qin","doi":"10.1109/ISKE.2017.8258816","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258816","url":null,"abstract":"This paper focuses on triple I inference method based on interval-valued fuzzy soft sets. Computational formulas for both interval-valued fuzzy soft modus ponens (IVF-SMP) and interval-valued fuzzy soft modus tollens (IVFSMT) with respect to left-continuous interval-valued t-norms and its residual interval-valued implication are presented. Besides, the reversibility property of triple I methods of IVFSMP and IVFSMT are analyzed.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113960576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258812
Ity Kaul, É. Martin, V. Puri
Trend detection in financial temporal data is a significant problem, with far-reaching applications, that presents researchers with many challenges. Existing techniques require users to choose a given interval and then provide an approximation of the data on that interval; they always produce some approximation, namely a member of a class of candidate functions that is "best" according to some criterion. Moreover, financial analysis can be performed from different perspectives and at different levels, from short term to long term; it is therefore very desirable to be able to indicate a scale suited to the analysis of interest. Based on these considerations, our objective was to design a method that lets users input a scale factor, determines the intervals on which an approximation captures a significant trend as a function of that scale factor, and proposes a qualification of the trend. The method combines various machine learning and statistical techniques, with a key role played by a change-point detection method. We describe the architecture of a system that implements the proposed method. Finally, we report on our experiments and use their results to show how they differ from the results that can be obtained with alternative approaches.
{"title":"A model for the detection of underlying trends in temporal data","authors":"Ity Kaul, É. Martin, V. Puri","doi":"10.1109/ISKE.2017.8258812","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258812","url":null,"abstract":"Trend detection in financial temporal data is a significant problem, with far-reaching applications, that presents researchers with many challenges. Existing techniques require users to choose a given interval, and then provide an approximation of the data on that interval; they always produce some approximation, namely, a member of a class of candidate functions that is \"best\" according to some criteria. Moreover, financial analysis can be performed from different perspectives, at different levels, from short term to long term; it is therefore very desirable to be able to indicate a scale that is suitable and adapted to the analysis of interest. Based on these considerations, our objective was to design a method that lets users input a scale factor, determines the intervals on which an approximation captures a significant trend as a function of the scale factor, and proposes a qualification of the trend. The method we use combines various machine-learning and statistical techniques, a key role being played by a change-point detection method. We describe the architecture of a system that implements the proposed method. 
Finally, we report on the experiments we ran and use their results to stress how they differ from the results than can be obtained from alternative approaches.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128465793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-11-01 | DOI: 10.1109/ISKE.2017.8258740
Á. L. Romero, Luis Martínez-López, Rosa M. Rodríguez
Consensus reaching processes (CRPs) in Group Decision Making (GDM) try to reach a mutual agreement among a group of decision makers before a decision is made, and multiple consensus models have been proposed in the literature to facilitate them. Classically, only a few decision makers participated in a CRP; nowadays, however, new technological environments and paradigms for group decision making demand the management of larger-scale problems, which add new requirements to the consensus solution. This contribution presents a study of a classical CRP applied to large-scale GDM in order to analyze its performance and identify the main challenges these processes face. The analysis is carried out in AFRYCA 2.0, a Java-based framework, simulating different large-scale GDM scenarios.
{"title":"Can classical consensus models deal with large scale group decision making?","authors":"Á. L. Romero, Luis Martínez-López, Rosa M. Rodríguez","doi":"10.1109/ISKE.2017.8258740","DOIUrl":"https://doi.org/10.1109/ISKE.2017.8258740","url":null,"abstract":"Consensus reaching processes (CRPs) in Group Decision Making (GDM) try to reach a mutual agreement among a group of decision makers before making a decision. To facilitate CRPs, multiple consensus models have been proposed in the literature. Classically, just a few decision makers participated in the CRP, however nowadays, the appearance of new technological environments and paradigms to make group decisions demand the management of larger scale problems that add new requirements to the solution of consensus. This contribution presents a study of a classical CRP applied to large-scale GDM in order to analyze its performance and detect which are the main challenges that these processes face in large-scale GDM. The analysis will be carried out in a java-based framework, AFRYCA 2.0, simulating different scenarios in large scale GDM.","PeriodicalId":208009,"journal":{"name":"2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130691688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}