"Analysis of Microscopic Behavior in Ant Traffic to Understand Jam-free Transportation" by P. Kasture and H. Nishimura. DOI: https://doi.org/10.1145/3396474.3396491
In this paper, we present an analysis of the microscopic behaviors of ants to understand the interactions that lead to jam-free ant traffic. We use an agent-based model of ant traffic together with a mathematical analysis of key scenarios on the ant trail to understand the relations between the ants' environment and their interactions with it. Our analysis indicates that ants increase their velocity as headway decreases, which leads to a peculiar jam-absorption mechanism. We also show that ants on the trail collect information about the flow in the recent past, which allows them to make informed decisions about their travel. Based on these observations, we propose that mimicking the ant communication system could help individuals in man-made transportation systems make better decisions, improving the efficiency of the overall system.
"Healthcare Center IoT Edge Gateway Based on Containerized Microservices" by Wiroon Sriborrirux and Peeradach Laortum. DOI: https://doi.org/10.1145/3396474.3396495
The adoption of ubiquitous healthcare systems, particularly for general and residential healthcare, is growing dramatically. One of the most significant components of such systems is the gateway, which acts as middleware between Internet of Things (IoT) devices and cloud application services. Here, we propose an IoT edge gateway framework based on Docker container technology as the virtualization layer, supporting microservice architectures for multi-device real-time monitoring in locations such as nursing homes and residential care centers. The framework identifies IoT devices and the gateway itself in home networks, restricting access to authorized users and non-manipulated devices. We propose the use of state-of-the-art hardware-based security supporting mutual authentication with the Elliptic Curve Digital Signature Algorithm, as well as integrity protection that validates the integrity of the device, gateway, and cloud platform by signing these data, so that manipulation and unauthorized changes can be detected. This approach can prevent man-in-the-middle attacks. As a result, each service in the proposed IoT edge gateway framework can be implemented to enhance edge-analytics capabilities with hardened security, at an average latency of 2.373 ms.
"Empirical Analysis of a Partial Dominance Approach to Many-Objective Optimisation" by A. Engelbrecht and Mardé Helbig. DOI: https://doi.org/10.1145/3396474.3396483
Studies on standard many-objective optimisation problems have indicated that multi-objective optimisation algorithms struggle to solve problems with more than three objectives, because most solutions become mutually non-dominated. The Pareto-dominance relation is therefore no longer effective in guiding the search towards an optimal Pareto front for many-objective optimisation problems. Recently, a partial dominance approach has been proposed to address this problem with the dominance relation on many objectives. Preliminary results have illustrated that the partial dominance relation is promising and scales well with an increasing number of objectives. This paper conducts a more extensive empirical analysis of the partial dominance relation on a larger benchmark of difficult many-objective optimisation problems, in comparison with state-of-the-art algorithms. The results further illustrate that partial dominance is an efficient approach for solving many-objective optimisation problems.
"Identification of Major Depressive Disorder: Using Significant Features of EEG Signals Obtained by Random Forest and Ant Colony Optimization Methods" by Saikat Bandopadhyay, Srijan Nag, Sujay Saha, and A. Ghosh. DOI: https://doi.org/10.1145/3396474.3396480
Electroencephalography (EEG) is an electrophysiological monitoring method that records the electrical activity of the brain. EEG is most often used to diagnose epilepsy, which causes abnormalities in EEG readings. It is also used to assess sleep disorders, depth of anesthesia, coma, encephalopathy, brain death, and depression. Major depressive disorder (MDD) is one of the most prevalent psychiatric disorders, and its depressive episodes are often misdiagnosed or overlooked. Identifying MDD at earlier stages of treatment could therefore facilitate efficient and specific treatment. In this article, the Random Forest (RF) and Ant Colony Optimization (ACO) algorithms are used to reduce the number of features by removing irrelevant and redundant ones. The selected features are then fed into k-nearest neighbors (KNN) and Support Vector Machine (SVM) classifiers, mathematical tools for data classification, regression, function estimation, and modeling, in order to separate MDD from non-MDD subjects. The proposed method uses the Wavelet Transform (WT) to decompose the EEG data into the corresponding frequency bands: delta, theta, alpha, beta, and gamma. A total of 119 participants were recruited by the University of Arizona from introductory psychology classes based on survey scores of the Beck Depression Inventory (BDI). The performance of the KNN and SVM classifiers is measured first with all features and then with the significant features selected by RF and ACO. Using the significant features obtained by the RF method, 44 MDD and 75 non-MDD subjects can be discriminated efficiently with 15 of 65 channels and 3 of 5 frequency bands. The classification accuracy improves from 70.21% and 76.67% with all features to 91.67% and 83.33% with only the significant features, for KNN and SVM respectively.
"An Enhanced Grey Wolf Algorithm Based on Equalization Mechanism" by Yun-tao Zhao, Wei Mei, and Weigang Li. DOI: https://doi.org/10.1145/3396474.3396494
Since Grey Wolf Optimization (GWO) has limitations when applied to real-world problems, such as slow convergence speed and low precision, and since it easily falls into local minima in the later stages of complex optimization problems, a novel grey wolf algorithm based on an equalization mechanism (EmGWO) is proposed. In the proposed algorithm, a uniformly distributed point set, an equalization mechanism, and a winning mechanism are used to enhance the search ability of the grey wolf algorithm. Simulations on well-known benchmark functions demonstrate the efficiency of the proposed EmGWO.
"Why Deep Learning Is More Efficient than Support Vector Machines, and How It Is Related to Sparsity Techniques in Signal Processing" by Laxman Bokati, O. Kosheleva, V. Kreinovich, and Uram Anibal Sosa Aguirre. DOI: https://doi.org/10.1145/3396474.3396478
Several decades ago, traditional neural networks were the most efficient machine learning technique. It then turned out that, in general, a different technique called support vector machines is more efficient. Fairly recently, a new technique called deep learning has been shown to be the most efficient of all. These are empirical observations, but how can we explain them, and thus make the corresponding conclusions more reliable? In this paper, we provide a possible theoretical explanation for the above empirical comparisons. This explanation also enables us to explain yet another empirical fact: that sparsity techniques have turned out to be very efficient in signal processing.
"Reducing Network Polarization by Edge Additions" by Ruben Interian, Jorge Moreno, and C. Ribeiro. DOI: https://doi.org/10.1145/3396474.3396486
Real-world networks are often extremely polarized because communication between groups of vertices can be weak and, most of the time, only vertices in the same group or sharing the same beliefs communicate with each other. We formulate the Minimum-Cardinality Balanced Edge Addition Problem as a strategy for reducing polarization in real-world networks based on a principle of minimum external intervention. We give an integer programming formulation and discuss computational results on randomly generated and real-life instances. We show that polarization can be reduced to a desired threshold by adding only a few edges. The minimum-intervention principle and the approach developed in this work are shown to constitute an effective strategy for reducing polarization in social, interaction, and communication networks.
"Deep Learning (Partly) Demystified" by V. Kreinovich and O. Kosheleva. DOI: https://doi.org/10.1145/3396474.3396481
The successes of deep learning are partly due to the appropriate selection of activation functions, pooling functions, etc. Most of these choices have been made based on empirical comparison and heuristic ideas. In this paper, we show that many of these choices, and the surprising success of deep learning in the first place, can be explained by reasonably simple and natural mathematics.
"Scheduling Tardiness Constrained Flow Shop with Simultaneously Loaded Stations Using Genetic Algorithm" by D. Davendra, F. Hermann, and M. Bialic-Davendra. DOI: https://doi.org/10.1145/3396474.3396475
This paper describes an approach for solving a tardiness-constrained flow shop with simultaneously loaded stations using a Genetic Algorithm (GA). This industrial problem is modeled on a filter-basket production line and is generally solved using deterministic algorithms. An evolutionary approach is used here to improve tardiness and to deliver more consistent results. A total of 120 problem instances in six test cases are randomly generated to mimic conditions that occur in industrial practice, and they are solved using 22 different GA scenarios. The results are compared with four standard benchmark priority-rule-based algorithms: First In First Out (FIFO), Raghu and Rajendran (RR), Shortest Processing Time (SPT), and Slack. Across all obtained results, the GA was found to consistently outperform all compared algorithms on all problem instances.
"Extension of the Time Dependent Travelling Salesman Problem with Interval Valued Intuitionistic Fuzzy Model Applying Memetic Optimization Algorithm" by Ruba Almahasneh, Boldizsar Tuu-Szabo, P. Földesi, and L. Kóczy. DOI: https://doi.org/10.1145/3396474.3396490
The Time Dependent Traveling Salesman Problem (TD TSP) is an extension of the classic Traveling Salesman Problem towards more realistic conditions. The TSP is one of the most extensively studied NP-hard graph search problems. In the TD TSP, edges are assigned different weights depending on whether they are traveled within traffic-jam regions (such as busy city centers) and during rush-hour periods; in such circumstances, edges are assigned higher costs, expressed by a multiplying factor. In this paper, we introduce a novel and even more realistic approach, the Interval-Valued Intuitionistic Fuzzy Time Dependent Traveling Salesman Problem (IVIFTD TSP), a further extension of the classic TD TSP that deploys interval-valued intuitionistic fuzzy sets to describe uncertainties. The core concept employs interval-valued intuitionistic fuzzy sets to quantify the additional travel costs caused by traffic-jam regions and rush-hour periods, which are always uncertain in real life. Since type-2 (such as interval-valued) fuzzy sets have the potential to model problems with higher uncertainties better than traditional fuzzy sets, the new approach may be considered an extended, practically more applicable version of the original abstract problem. The optimization of such a complex model is obviously very difficult; it is a mathematically intractable problem. However, the Discrete Bacterial Memetic Evolutionary Algorithm proposed earlier by the authors' team has shown sufficient efficiency, general applicability to similar types of problems, and good predictability in terms of problem size, so it is applied to optimize the concrete instances.