Novel networking technologies such as massive Internet-of-Things and 6G-and-beyond cellular networks are based on ultra-dense wireless communications. A wireless communication channel is a shared medium that demands access control, such as proper transmission scheduling. The SINR model can improve the performance of ultra-dense wireless networks by taking the effects of interference into consideration, allowing multiple simultaneous transmissions in the same coverage area and on the same frequency band. However, scheduling in wireless networks under the SINR model is an NP-hard problem. This work presents a bioinspired solution based on a genetic heuristic to solve that problem. The proposed solution, called Genetic-based Transmission Scheduler (GeTS), produces a complete transmission schedule of optimized size, increasing the number of simultaneous transmissions (i.e., spatial reuse) and thus allowing devices to communicate as soon as possible. Simulation results are presented for GeTS, including a convergence test and comparisons with alternative approaches. The results confirm the ability of the solution to produce near-optimal schedules.
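The abstract does not detail GeTS's chromosome encoding or fitness function; the sketch below is only a minimal, generic genetic heuristic for slot-based transmission scheduling, in which a chromosome assigns each link to a time slot and the fitness rewards short schedules whose slots pass a placeholder feasibility check. The `is_feasible` function stands in for a real SINR computation and, like the other names and parameters, is an assumption rather than the authors' implementation.

```python
# Illustrative sketch only: a generic genetic heuristic for slot-based scheduling.
# A chromosome assigns each link a time slot; fitness rewards short schedules
# whose slots satisfy a (placeholder) SINR feasibility check.
import random

def is_feasible(slot_links):
    """Placeholder check: assumed to return True iff the links in one slot can
    transmit simultaneously. Replace with a real SINR computation."""
    return len(slot_links) <= 3  # toy constraint for the sketch

def fitness(chromosome, links):
    slots = {}
    for link, slot in zip(links, chromosome):
        slots.setdefault(slot, []).append(link)
    penalty = sum(0 if is_feasible(ls) else len(ls) for ls in slots.values())
    return -(len(slots) + 10 * penalty)  # fewer slots and no violations is better

def evolve(links, pop_size=40, generations=200):
    n = len(links)
    pop = [[random.randrange(n) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, links), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # mutation: move one link to another slot
                child[random.randrange(n)] = random.randrange(n)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, links))

print(evolve(links=list(range(8))))
```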
{"title":"A genetic scheduling strategy with spatial reuse for dense wireless networks","authors":"Vinicius Fulber-Garcia, F. Engel, E. P. Duarte","doi":"10.3233/his-230015","DOIUrl":"https://doi.org/10.3233/his-230015","url":null,"abstract":"Novel networking technologies such as massive Internet-of-Things and 6G-and-beyond cellular networks are based on ultra-dense wireless communications. A wireless communication channel is a shared medium that demands access control, such as proper transmission scheduling. The SINR model can improve the performance of ultra-dense wireless networks by taking into consideration the effects of interference to allow multiple simultaneous transmissions in the same coverage area and using the same frequency band. However, scheduling in wireless networks under the SINR model is an NP-hard problem. This work presents a bioinspired solution based on a genetic heuristic to solve that problem. The proposed solution, called Genetic-based Transmission Scheduler (GeTS) produces a complete transmission schedule optimizing size, increasing the number of simultaneous transmissions (i.e., spatial reuse) thus allowing devices to communicate as soon as possible. Simulation results are presented for GeTS, including a convergence test and comparisons with other alternatives. Results confirm the ability of the solution to produce near-optimal schedules.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69908630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kazushi Fujino, Takeru Aoki, K. Takadama, Hiroyuki Sato
The cortical learning algorithm (CLA) is a time-series prediction algorithm. Memory elements called columns and cells discretely represent data with their state combinations, whereas linking elements called synapses change their state combinations. For tasks that require taking actions, the action-prediction CLA (ACLA) has the advantage of complementing missing state values with their predictions. However, an increase in the number of missing state values (i) generates excess synapses that negatively affect the action predictions and (ii) decreases the stability of the data representation, making the output of action values difficult. This paper proposes an adaptive ACLA using (i) adaptive synapse adjustment and (ii) adaptive action-separated decoding in an uncertain environment where multiple input state values are missing probabilistically. (i) The proposed adaptive synapse adjustment suppresses unnecessary synapses. (ii) The proposed adaptive action-separated decoding adaptively outputs an action prediction separately for each action value. Experimental results on uncertain two- and three-dimensional mountain car tasks show that the proposed adaptive ACLA achieves more robust action prediction performance than the conventional ACLA, DDPG, and the three LSTM-assisted reinforcement learning algorithms DDPG, TD3, and SAC, even as the number of missing state values and their frequencies increase. These results suggest that the proposed adaptive ACLA is a way of making decisions about the future, even in cases where information about the surrounding situation is partially lacking.
{"title":"Adaptive action-prediction cortical learning algorithm under uncertain environments","authors":"Kazushi Fujino, Takeru Aoki, K. Takadama, Hiroyuki Sato","doi":"10.3233/his-230013","DOIUrl":"https://doi.org/10.3233/his-230013","url":null,"abstract":"The cortical learning algorithm (CLA) is a time series prediction algorithm. Memory elements called columns and cells discretely represent data with their state combinations, whereas linking elements called synapses change their state combinations. For tasks requiring to take actions, the action-prediction CLA (ACLA) has an advantage to complement missing state values with their predictions. However, an increase in the number of missing state values (i) generates excess synapses negatively affect the action predictions and (ii) decreases the stability of data representation and makes the output of action values difficult. This paper proposes an adaptive ACLA using (i) adaptive synapse adjustment and (ii) adaptive action-separated decoding in an uncertain environment, missing multiple input state values probabilistically. (i) The proposed adaptive synapse adjustment suppresses unnecessary synapses. (ii) The proposed adaptive action-separated decoding adaptively outputs an action prediction separately for each action value. Experimental results using uncertain two- and three-dimensional mountain car tasks show that the proposed adaptive ACLA achieves a more robust action prediction performance than the conventional ACLA, DDPG, and the three LSTM-assisted reinforcement learning algorithms of DDPG, TD3, and SAC, even though the number of missing state values and their frequencies increase. These results implicate that the proposed adaptive ACLA is a way to making decisions for the future, even in cases where information surrounding the situation partially lacked.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48562858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The combination of Quality of Thing (QoT) with Internet of Things (IoT) systems can be challenging because of the vast number of connected devices, the diverse types of applications and services, and varying network conditions. During the process of composing these Things, heterogeneity arises as an uncertainty. Hence, uncertainty and imprecision emerge as a consequence of the plethora of Things as well as the variety of composition paths. One way to address these challenges is to use fuzzy logic to model uncertainty and imprecision and a genetic algorithm to find the optimal path. Accordingly, we propose a model of Thing behaviour based on QoT non-functional properties, as well as a hybrid approach for modeling the uncertainty of the configurable composition based on fuzzy logic and a genetic algorithm. Our approach helps to ensure that IoT applications and services receive the resources they need to function effectively, even in the presence of varying network conditions and changing demands.
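As a rough illustration of how fuzzy logic can score a composition path before a genetic algorithm searches for the best one, the sketch below evaluates hypothetical QoT attributes (latency and reliability) with triangular membership functions. The attribute names, fuzzy-set ranges, and min-based aggregation are assumptions for the example, not the authors' model.

```python
# Illustrative sketch only: triangular fuzzy membership used to score a
# composition path by its (hypothetical) latency and reliability attributes.
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with corners (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def path_score(path):
    """Aggregate the degrees of 'acceptable latency' and 'high reliability'
    for every Thing on the path using a min (fuzzy AND) operator."""
    degrees = []
    for thing in path:
        degrees.append(triangular(thing["latency_ms"], 0, 20, 100))
        degrees.append(triangular(thing["reliability"], 0.6, 1.0, 1.4))
    return min(degrees)

path = [{"latency_ms": 15, "reliability": 0.95}, {"latency_ms": 40, "reliability": 0.9}]
print(path_score(path))  # a score in [0, 1] that a genetic algorithm could use as fitness
```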
{"title":"A hybrid approach: Uncertain configurable QoT-IoT composition based on fuzzy logic and genetic algorithm","authors":"Soura Boulaares, S. Sassi, D. Benslimane, S. Faiz","doi":"10.3233/his-230014","DOIUrl":"https://doi.org/10.3233/his-230014","url":null,"abstract":"The combination of Quality of Thing (QoT) with Internet of Things (IoT) systems can be challenging because of the vast number of connected devices, diverse types of applications and services, and varying network conditions. During the process of composing these Things, heterogeneity arises as an uncertainty. Hence, uncertainty and imprecision emerge as a consequence of the plethora of things as well as the variety of the composition paths. One way to address these challenges is through the use of fuzzy logic to mimic uncertainty and imprecision modeling and genetic algorithm to find the optimal path. As a result, we propose a model for the Thing behaviour based on QoT non-functional properties. As well as we propose a hybrid approach for modeling the uncertainty of the configurable composition based on fuzzy logic and genetic algorithm. Our approach helps to ensure that IoT applications and services receive the resources they need to function effectively, even in the presence of varying network conditions and changing demands.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48179234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An embryonic architecture that carries a self-evolving design with a fault-tolerance feature is proposed for deep space missions. Fault tolerance is achieved in the embryonic architecture thanks to its homogeneous structure. The cloning of configuration data, or genome data, to all the embryonic cells makes each cell capable of selecting the required cell function using a selective gene. The primary digital circuits of avionics are implemented on the fabric, where the configuration data in Cartesian Genetic Programming (CGP) format is evolved through a customized GA. The CGP format is preferred over the LUT format for the circuit configuration data because of its fixed data size in the case of modular design. Furthermore, the CGP format enables fault detection at the embryonic cell level as well as at the logic gate level. Various combinational and sequential circuits, such as an adder, comparator, multiplier, register, and counter, are designed and implemented on the embryonic fabric using Verilog. The circuit performance is evaluated using simulation. The proposed PHsClone genetic algorithm (GA) design uses a parallel-pipeline approach to achieve faster convergence. Four concurrent PHsClone GA executions (four parallel threads) achieve convergence 10 times faster for a 1-bit adder and 3 times faster for a 2-bit comparator.
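The paper's exact CGP configuration-data format is not given in the abstract; as a minimal illustration of the general idea, the sketch below decodes a generic CGP-style genome, where each node gene is a (function, input, input) triple, and evaluates it as a half adder. The encoding details and names are assumptions.

```python
# Illustrative sketch only: decoding a generic CGP genome for a tiny logic circuit.
# Each node gene is (function_id, input_a, input_b); indices below the number of
# primary inputs refer to those inputs, higher indices refer to earlier nodes.
FUNCS = {0: lambda a, b: a & b, 1: lambda a, b: a | b, 2: lambda a, b: a ^ b}

def evaluate(genome, inputs, output_index):
    values = list(inputs)                      # primary inputs come first
    for func_id, ia, ib in genome:
        values.append(FUNCS[func_id](values[ia], values[ib]))
    return values[output_index]

# Half adder from two nodes: node 2 = XOR(in0, in1) -> sum, node 3 = AND(in0, in1) -> carry
genome = [(2, 0, 1), (0, 0, 1)]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "sum =", evaluate(genome, (a, b), 2), "carry =", evaluate(genome, (a, b), 3))
```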
{"title":"GA evolved CGP configuration data for digital circuit design on embryonic architecture","authors":"Gayatri Malhotra, P. Duraiswamy","doi":"10.3233/his-230012","DOIUrl":"https://doi.org/10.3233/his-230012","url":null,"abstract":"Embryonic architecture that carries self-evolving design with fault tolerant feature is proposed for deep space missions. Fault tolerance is achieved in the embryonic architecture due to its homogeneous structure. The cloning of configuration data or genome data to all the embryonic cells makes each cell capable of selecting required cell function using selective gene. The primary digital circuits of avionics are implemented on the fabric, where the configuration data in Cartesian Genetic Programming (CGP) format is evolved through customized GA. The CGP format is preferred over LUT format for the circuit configuration data due to its fixed data size in case of modular design. Further the CGP format enables fault detection at embryonic cell level as well as logic gate level. The various combinational and sequential circuits like adder, comparator, multiplier, register and counter are designed and implemented on embryonic fabric using Verilog. The circuit performance is evaluated using simulation. The proposed PHsClone genetic algorithm (GA) design with parallel-pipeline approach is to achieve faster convergence. Four concurrent PHsClone GA executions (four parallel threads) achieve convergence for the 10 times faster for a 1-bit adder, and 3 times faster for a 2-bit comparator.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45686686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the amount of information exceeds the management and storage capacity of traditional data management systems, several domains need to take this growth of data into account, in particular the decision-making domain known as Business Intelligence (BI). Since the accumulation and reuse of these massive data represent a gold mine for businesses, insights that are useful and essential for effective decision making have to be provided. However, there are several problems and challenges for BI systems, especially at the level of the ETL (Extraction-Transformation-Loading) integration system. These processes are responsible for the selection, filtering, and restructuring of data sources in order to obtain relevant decisions. In this research paper, our central focus is on an adaptation of the extraction phase, inspired by the first step of the MapReduce paradigm, in order to prepare massive data for the transformation phase. Subsequently, we provide a conceptual model of the extraction phase that is composed of a conversion operation, which guarantees a NoSQL structure suitable for Big Data storage, and a vertical partitioning operation, which defines the storage mode before submitting data to the second ETL phase. Finally, we implement our new component in Talend for Big Data, which helps the designer extract data from semi-structured sources.
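As a rough sketch of a map-style extraction step of the kind described above, the code below parses semi-structured JSON records, converts them into NoSQL-like documents, and vertically partitions the fields into column families. The record layout, partition names, and the `map_extract` helper are hypothetical and are not the authors' Talend component.

```python
# Illustrative sketch only: a map-style extraction step that converts
# semi-structured (JSON) records into NoSQL-like documents and vertically
# partitions the fields into column families.
import json

RECORDS = ['{"id": 1, "name": "Alice", "city": "Tunis", "total": 120.5}',
           '{"id": 2, "name": "Bob", "city": "Sfax", "total": 80.0}']

PARTITIONS = {"identity": ["name", "city"], "metrics": ["total"]}

def map_extract(raw_record):
    """Map phase: parse one raw record and emit (partition, document) pairs."""
    record = json.loads(raw_record)
    for family, fields in PARTITIONS.items():
        doc = {"id": record["id"]}
        doc.update({f: record[f] for f in fields if f in record})
        yield family, doc

for raw in RECORDS:
    for family, doc in map_extract(raw):
        print(family, doc)
```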
{"title":"Conceptual modeling of Big Data extraction phase","authors":"Hana Mallek, Faïza Ghozzi, F. Gargouri","doi":"10.3233/his-230008","DOIUrl":"https://doi.org/10.3233/his-230008","url":null,"abstract":"As the amount of information exceeds the management and storage capacity of traditional data management systems, several domains need to take into account this growth of data, in particular the decision-making domain known as Business Intelligence (BI). Since the accumulation and reuse of these massive data stands for a gold mine for businesses, several insights that are useful and essential for effective decision making have to be provided. However, it is obvious that there are several problems and challenges for the BI systems, especially at the level of the ETL (Extraction-Transformation-Loading) as an integration system. These processes are responsible for the selection, filtering and restructuring of data sources in order to obtain relevant decisions. In this research paper, our central focus is especially upon the adaptation of the extraction phase inspired from the first step of MapReduce paradigm in order to prepare the massive data to the transformation phase. Subsequently, we provide a conceptual model of the extraction phase which is composed of a conversion operation that guarantees obtaining NoSQL structure suitable for Big Data storage, and a vertical partitioning operation for presenting the storage mode before submitting data to the second ETL phase. Finally, we implement through Talend for Big Data our new component which helps the designer extract data from semi-structured data.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41582986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task scheduling is crucial for achieving high performance in parallel computing. Since task scheduling is NP-hard, the efficient assignment of tasks to compute resources remains an issue. Across the literature, several algorithms have been proposed to solve different scheduling problems. One group of promising approaches in this field is formed by swarm-based algorithms, which have the potential to benefit from parallel execution. Common swarm-based algorithms are Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). In this article, we propose two new scheduling methods based on parallel ACO, PSO, and Hill Climbing. These algorithms are used to solve the problem of scheduling independent tasks onto heterogeneous multicore platforms. The results of performance measurements demonstrate the improvements in makespan and scheduling time achieved by the parallel variants.
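The article's parallel implementations are not reproduced here; the sketch below is only a simplified, sequential PSO that maps independent tasks onto heterogeneous cores to minimize the makespan, with continuous particle positions rounded to core indices. The task sizes, core speeds, and PSO parameters are illustrative assumptions.

```python
# Illustrative sketch only: a simplified PSO that assigns independent tasks to
# heterogeneous cores so as to minimize the makespan.
import random

TASKS = [4.0, 3.0, 7.0, 2.0, 5.0, 6.0]     # work per task (arbitrary units)
SPEEDS = [1.0, 2.0, 1.5]                   # relative speed of each core

def makespan(position):
    loads = [0.0] * len(SPEEDS)
    for task, x in zip(TASKS, position):
        core = min(int(x), len(SPEEDS) - 1)    # round the continuous position to a core index
        loads[core] += task / SPEEDS[core]
    return max(loads)

def pso(n_particles=20, iterations=200, w=0.7, c1=1.5, c2=1.5):
    dim, n_cores = len(TASKS), len(SPEEDS)
    pos = [[random.uniform(0, n_cores) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]
    gbest = min(best, key=makespan)
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (best[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), n_cores - 1e-9)
            if makespan(pos[i]) < makespan(best[i]):
                best[i] = pos[i][:]
        gbest = min(best, key=makespan)
    return gbest, makespan(gbest)

assignment, span = pso()
print([min(int(x), len(SPEEDS) - 1) for x in assignment], span)
```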
{"title":"Parallel swarm-based algorithms for scheduling independent tasks","authors":"Robert Dietze, Maximilian Kränert","doi":"10.3233/his-230006","DOIUrl":"https://doi.org/10.3233/his-230006","url":null,"abstract":"Task scheduling is crucial for achieving high performance in parallel computing. Since task scheduling is NP-hard, the efficient assignment of tasks to compute resources remains an issue. Across the literature, several algorithms have been proposed to solve different scheduling problems. One group of promising approaches in this field is formed by swarm-based algorithms which have a potential to benefit from a parallel execution. Common swarm-based algorithms are Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). In this article, we propose two new scheduling methods based on parallel ACO, PSO and, Hill Climbing, respectively. These algorithms are used to solve the problem of scheduling independent tasks onto heterogeneous multicore platforms. The results of performance measuements demonstrate the improvements on the makespan and the scheduling time achieved by the parallel variants.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"9 1","pages":"79-93"},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84295633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anis Mezghani, R. Maalej, M. Elleuch, M. Kherallah
Handwritten text recognition remains a popular area of research, and an analysis of the available techniques is therefore necessary. This article presents a bibliographic study of existing recognition systems with the aim of motivating researchers to examine these techniques and develop more advanced ones. It presents a detailed comparative study of Arabic handwritten character recognition techniques using holistic, analytical, and segmentation-free approaches. In this study, we first show the difference between recognition approaches: deep learning versus machine learning. Second, we describe the Arabic handwriting recognition process, covering pre-processing, feature extraction, and segmentation. Then, we illustrate the main techniques used in the field of handwriting recognition and provide a synthesis of these methods.
{"title":"Recent advances of ML and DL approaches for Arabic handwriting recognition: A review","authors":"Anis Mezghani, R. Maalej, M. Elleuch, M. Kherallah","doi":"10.3233/his-230005","DOIUrl":"https://doi.org/10.3233/his-230005","url":null,"abstract":"Handwritten text recognition remains a popular area of research. An analysis of these techniques is more necessary. This article is practically interested in a bibliographic study on existing recognition systems with the aim of motivating researchers to look into these techniques and try to develop more advanced ones. It presents a detailed comparative study carried out on some Arabic handwritten character recognition techniques using holistic, analytical and a segmentation-free approaches. In this study, first, we show the difference between different recognition approaches: deep learning vs machine learning. Secondly, a description of the Arabic handwriting recognition process regrouping pre-processing, feature extraction and segmentation was presented. Then, we illustrate the main techniques used in the field of handwriting recognition and we make a synthesis of these methods.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":"137 1","pages":"61-78"},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75707742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research work addresses the rapidly growing need for Artificial Intelligence and statistics in medical research. The objective is to design a diagnostic prediction system that can detect and predict diseases at an early stage from clinical data sets. Heart disease and cancer are among the leading causes of death globally. There are different kinds of cancer; in this study, we focus on breast cancer and heart disease. These diseases are curable if predicted at a very early stage, and preventive diagnosis can control the death rate. We designed two Artificial Intelligence systems for the prediction of the above-mentioned diseases using statistics and deep neural networks: (i) Combinatorial Learning (CLSDnn) and (ii) an optimized efficient Combinatorial Learning (eCLSDnn). To evaluate the performance of the proposed systems, we conducted experiments on three different data sets, two of which concern breast cancer, namely the Wisconsin data set of the UCI Machine Learning repository and the AI for Social Good: Women Coders’ Bootcamp data set, together with the Cleveland heart disease data set of the UCI Machine Learning repository. The proposed binary classification architectures are validated with a 70%–30% data split and with K-fold cross-validation. For the recognition of malignant tumors, the CLSDnn model achieved a maximum accuracy of 98.53% on the Wisconsin data set, 95.32% on the AI for Social Good: Women Coders’ data set, and 96.72% on the Cleveland data set, while the eCLSDnn model achieved 99.36% on the Wisconsin data set, 97.12% on the AI for Social Good: Women Coders’ data set, and 99.56% on the Cleveland heart disease data set.
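The abstract does not describe the CLSDnn/eCLSDnn architectures, so the sketch below only mirrors the evaluation protocol: a baseline MLP assessed with a 70%–30% hold-out split and 10-fold cross-validation on the Wisconsin breast cancer data set bundled with scikit-learn. The network size and fold count are assumptions.

```python
# Illustrative sketch only: a baseline MLP evaluated with a 70%-30% split and
# K-fold cross-validation on the Wisconsin breast cancer data set, mirroring
# the evaluation protocol described above (not the CLSDnn/eCLSDnn models).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))

# 70%-30% hold-out evaluation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
print("hold-out accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# 10-fold cross-validation
scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print("10-fold mean accuracy:", scores.mean())
```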
{"title":"An optimized efficient combinatorial learning using deep neural network and statistical techniques","authors":"Jyothi V K, Guda Ramachandra Kaladhara Sarma","doi":"10.3233/his-230007","DOIUrl":"https://doi.org/10.3233/his-230007","url":null,"abstract":"Research work is to discover the rapid requirement of Artificial Intelligence and Statistics in medical research. Objective is to design a diagnostic prediction system that can detect and predict diseases at an early stage from clinical data sets. Some of major diseases leading reasons of death globally are heart disease and cancer. There are different kinds of cancer, in this study we focused on breast cancer and heart disease. Prediction of these diseases at a very early stage is curable and preventive diagnosis can control death rate. Designed two Artificial Intelligence systems for prediction of above-mentioned diseases using statistics and Deep neural networks (i) Combinatorial Learning (CLSDnn) and (ii) an optimized efficient Combinatorial Learning (eCLSDnn). To evaluate the performance of the proposed system conducted experiments on three different data sets, in which two data sets are of breast cancer namely, Wisconsin-data set of UCI Machine Learning repository and AI for Social Good: Women Coders’ Bootcamp data set and Cleveland heart disease data set of UCI Machine Learning repository. The proposed architectures of binary classification are validated for 70%–30% data splitting and on K-fold cross validation. Recognition of Malignant cancerous tumors CLSDnn model achieved maximum accuracy of 98.53% for Wisconsin data set, 95.32% for AI for Social Good: Women Coders’ data set and 96.72% for Cleveland data set. Recognition of Malignant cancerous tumors eCLSDnn model achieved 99.36% for Wisconsin data set, 97.12% for AI for Social Good: Women Coders’ data set and 99.56% for the Cleveland heart disease data set.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47394224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The present paper proposes a new learning method based on destructive computing, in contrast to conventional progressive computing or steady-step learning. In spite of the large amount of biased or distorted information in inputs, conventional learning methods fundamentally aim to gradually acquire information that is as faithful as possible to the inputs, which has prevented us from acquiring the intrinsic information hidden in the deepest level of the inputs. We instead permit a leap to that level by changing the information at hand not gradually but drastically. In particular, to realize a truly drastic change of information, we introduce winner-lose-all (WLA) competition to drastically destroy the supposedly most important information, so as to immediately reach, or leap to, the intrinsic information hidden in complicated inputs. The method was applied to a target-marketing problem. The experimental results show that, with the new method, multi-layered neural networks were able to disentangle complicated network configurations into the simplest ones, with simple and independent correlation coefficients between inputs and targets. This was realized by drastically changing the information content in the course of learning and, correspondingly, by mixing regular and irregular properties over the connection weights.
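One plausible reading of winner-lose-all competition, as opposed to the usual winner-take-all, is that the strongest unit is suppressed rather than kept; the sketch below contrasts the two on a vector of activations. This interpretation is an assumption and not necessarily the paper's exact formulation.

```python
# Illustrative sketch only: conventional winner-take-all versus a winner-lose-all
# step that suppresses the strongest unit (one plausible reading of WLA).
import numpy as np

def winner_take_all(activations):
    out = np.zeros_like(activations)
    out[np.argmax(activations)] = activations.max()   # keep only the strongest unit
    return out

def winner_lose_all(activations):
    out = activations.copy()
    out[np.argmax(activations)] = 0.0                  # destroy the supposedly most important unit
    return out

h = np.array([0.1, 0.9, 0.4, 0.6])
print("WTA:", winner_take_all(h))
print("WLA:", winner_lose_all(h))
```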
{"title":"Destructive computing with winner-lose-all competition in multi-layered neural networks","authors":"R. Kamimura","doi":"10.3233/his-230011","DOIUrl":"https://doi.org/10.3233/his-230011","url":null,"abstract":"The present paper aims to propose a new learning method based on destructive computing, contrary to the conventional progressive computing or the steady-step learning. In spite of the existence of a large amount of biased or distorted information in inputs, the conventional learning methods fundamentally aim to gradually acquire information that is as faithful as possible to inputs, which has prevented us from acquiring intrinsic information hidden in the deepest level of inputs. At this time, it is permitted to suppose a leap to that level by changing information at hand not gradually but drastically. In particular, for the really drastic change of information, we introduce the winner-lose-all (WLA) to drastically destroy the supposedly most important information for immediately reaching or leaping to intrinsic information, hidden in complicated inputs. The method was applied to a target-marketing problem. The experimental results show that, with the new method, multi-layered neural networks had an ability to disentangle complicated network configurations into the simplest ones with simple and independent correlation coefficients between inputs and targets. This was realized by drastically changing the information content in the course of learning and, correspondingly, by mixing regular and irregular properties over connection weights.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46847788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elaine Pinto Portela, O. Cortes, Josenildo Costa da Silva
The world has recently faced the COVID-19 pandemic, a disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The main features of this disease are its rapid spread and high mortality. The illness led to the rapid development of vaccines that can fight the virus; however, their actual effectiveness is not fully known. Thus, early detection of the disease is still necessary to provide a suitable course of action. To help with early detection, intelligent methods such as machine learning and computational intelligence associated with computer vision algorithms can be used in a fast and efficient classification process, especially ensemble methods, which in the worst case offer efficiency similar to that of traditional machine learning algorithms. In this context, this review aims to answer the following questions: (i) which ensemble techniques are most used, (ii) what accuracy those methods reached, (iii) which classes are involved in the classification task, (iv) which machine learning algorithms and models are mainly used, and (v) which datasets are used in the experiments.
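As background for the kind of ensemble methods the review surveys, the sketch below builds a minimal soft-voting ensemble with scikit-learn; it runs on synthetic tabular features rather than actual image-based exams, and the choice of base estimators is an assumption.

```python
# Illustrative sketch only: a minimal soft-voting ensemble of the kind surveyed
# by the review, demonstrated on synthetic tabular features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")  # averages predicted class probabilities across the base models
print("5-fold mean accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```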
{"title":"A rapid literature review on ensemble algorithms for COVID-19 classification using image-based exams","authors":"Elaine Pinto Portela, O. Cortes, Josenildo Costa da Silva","doi":"10.3233/his-230009","DOIUrl":"https://doi.org/10.3233/his-230009","url":null,"abstract":"The world recently has faced the COVID-19 pandemic, a disease caused by the severe acute respiratory syndrome. The main features of this disease are the rapid spread and high-level mortality. The illness led to the rapid development of a vaccine that we know can fight against the virus; however, we do not know the actual vaccine’s effectiveness. Thus, the early detection of the disease is still necessary to provide a suitable course of action. To help with early detection, intelligent methods such as machine learning and computational intelligence associated with computer vision algorithms can be used in a fast and efficient classification process, especially using ensemble methods that present similar efficiency to traditional machine learning algorithms in the worst-case scenario. In this context, this review aims to answer four questions: (i) the most used ensemble technique, (ii) the accuracy those methods reached, (iii) the classes involved in the classification task, (iv) the main machine learning algorithms and models, and (v) the dataset used in the experiments.","PeriodicalId":88526,"journal":{"name":"International journal of hybrid intelligent systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42163598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}