Research on hands-free input methods has been actively conducted. However, most previous methods are difficult to use at all times in daily life because they rely on speech sounds or body movements. In this study, to realize a hands-free input method based on nasal breathing using wearable devices, we propose a method for recognizing nasal breath gestures using piezoelectric elements placed on the nosepiece of a glasses-type device. In the proposed method, nasal vibrations generated by breathing are acquired as sound data from the device. The breath pattern is then recognized based on three factors: breath count, time interval, and intensity. We implemented a prototype system. The evaluation results for 10 subjects showed that the proposed method can recognize eight types of nasal breath gestures with an F-value of 0.82. The results also showed that the recognition accuracy increases to more than 90% when the gestures are limited to those with different breath counts or different breath intervals. Our study provides the first glasses-type wearable sensing technology that uses nasal breathing for hands-free input.
{"title":"Nasal Breath Input: Exploring Nasal Breath Input Method for Hands-Free Input by Using a Glasses Type Device with Piezoelectric Elements","authors":"Ryoma Ogawa, Kyosuke Futami, Kazuya Murao","doi":"10.26421/jdi3.4-2","DOIUrl":"https://doi.org/10.26421/jdi3.4-2","url":null,"abstract":"Research on hands-free input methods has been actively conducted. However, most of the previous methods are difficult to use at any time in daily life due to using speech sounds or body movements. In this study, to realize a hands-free input method based on nasal breath using wearable devices, we propose a method for recognizing nasal breath gestures, using piezoelectric elements placed on the nosepiece of a glasses-type device. In the proposed method, nasal vibrations generated by nasal breath are acquired as sound data from the devices. Next, the breath pattern is recognized based on the factors of breath count, time interval, and intensity. We implemented a prototype system. The evaluation results for 10 subjects showed that the proposed method can recognize eight types of nasal breath gestures at 0.82% of F-value. The evaluation results also showed that the recognition accuracy is increased to more than 90% by limiting gestures to those with a different breath count or different breath interval. Our study provides the first glasses type wearable sensing technology that uses nasal breathing for hands-free input.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74052714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teresa Alcamo, A. Cuzzocrea, G. Pilato, Daniele Schicchi
We analyze and compare five deep-learning neural architectures for irony and sarcasm detection in the Italian language. We briefly analyze the model architectures to choose the best compromise between performance and complexity. The results show the effectiveness of such systems in handling the problem, achieving an F1-score of 93% in the best case. As a case study, we also illustrate a possible embedding of the neural systems in a cloud computing infrastructure to exploit the computational advantages of such an approach when tackling big data.
{"title":"Sentiment Mining and Analysis over Text Corpora via Complex Deep Learning Naural Architectures","authors":"Teresa Alcamo, A. Cuzzocrea, G. Pilato, Daniele Schicchi","doi":"10.26421/jdi2.4-4","DOIUrl":"https://doi.org/10.26421/jdi2.4-4","url":null,"abstract":"We analyze and compare five deep-learning neural architectures to manage the problem of irony and sarcasm detection for the Italian language. We briefly analyze the model architectures to choose the best compromise between performances and complexity. The obtained results show the effectiveness of such systems to handle the problem by achieving 93% of F1-Score in the best case. As a case study, we also illustrate a possible embedding of the neural systems in a cloud computing infrastructure to exploit the computational advantage of using such an approach in tackling big data.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73467199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-16 | DOI: 10.22044/JADM.2021.10718.2208
E. Feli, R. Hosseini, S. Yazdani
In vitro fertilization (IVF) is one of the scientifically established methods of infertility treatment. This study aimed at improving the prediction of IVF success using machine learning optimized through evolutionary algorithms. A Multilayer Perceptron (MLP) neural network was proposed to classify the infertility dataset, and a genetic algorithm was used to improve the performance of the MLP model. The proposed model was applied to a dataset of 594 eggs from 94 patients undergoing IVF, of which 318 were good-quality embryos and 276 were of lower quality. For performance evaluation of the MLP model, an ROC curve analysis was conducted and 10-fold cross-validation was performed. The results revealed that this intelligent model achieves high efficiency, with an accuracy of 96% for the MLP neural network, which is promising compared to counterpart methods.
{"title":"An Intelligent Model for Prediction of In-Vitro Fertilization Success using MLP Neural Network and GA Optimization","authors":"E. Feli, R. Hosseini, S. Yazdani","doi":"10.22044/JADM.2021.10718.2208","DOIUrl":"https://doi.org/10.22044/JADM.2021.10718.2208","url":null,"abstract":"In Vitro Fertilization (IVF) is one of the scientifically known methods of infertility treatment. This study aimed at improving the performance of predicting the success of IVF using machine learning and its optimization through evolutionary algorithms. The Multilayer Perceptron Neural Network (MLP) were proposed to classify the infertility dataset. The Genetic algorithm was used to improve the performance of the Multilayer Perceptron Neural Network model. The proposed model was applied to a dataset including 594 eggs from 94 patients undergoing IVF, of which 318 were of good quality embryos and 276 were of lower quality embryos. For performance evaluation of the MLP model, an ROC curve analysis was conducted, and 10-fold cross-validation performed. The results revealed that this intelligent model has high efficiency with an accuracy of 96% for Multi-layer Perceptron neural network, which is promising compared to counterparts methods.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49555833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-22 | DOI: 10.22044/JADM.2021.10253.2164
Ali Nozaripour, Hadi Soltanizadeh
Owing to advantages such as noise resistance and a strong mathematical foundation, sparse representation has received attention as a powerful tool in recent decades. In this paper, using sparse representation, the kernel trick, and a different technique for Region of Interest (ROI) extraction that we presented in our previous work, a new rotation-robust method is introduced for dorsal hand vein recognition. In this method, the ROI is selected by changing the length and angle of its sides, so that the undesirable effects of hand rotation during image acquisition are largely neutralized. Consequently, depending on the amount of hand rotation, the ROI in each image differs in size and shape. In addition, because dorsal hand vein patterns share the same direction distribution, we apply the kernel trick to sparse representation for classification, so that most samples belonging to different classes but having the same direction distribution are classified properly. Together, these two techniques yield an effective rotation-robust method for dorsal hand vein recognition. An increase of 2.26% in the recognition rate is observed for the proposed method when compared to three conventional SRC-based algorithms and three sparse-coding classification methods that use dictionary learning.
{"title":"Robust Vein Recognition against Rotation Using Kernel Sparse Representation","authors":"Ali Nozaripour, Hadi Soltanizadeh","doi":"10.22044/JADM.2021.10253.2164","DOIUrl":"https://doi.org/10.22044/JADM.2021.10253.2164","url":null,"abstract":"Sparse representation due to advantages such as noise-resistant and, having a strong mathematical theory, has been noticed as a powerful tool in recent decades. In this paper, using the sparse representation, kernel trick, and a different technique of the Region of Interest (ROI) extraction which we had presented in our previous work, a new and robust method against rotation is introduced for dorsal hand vein recognition. In this method, to select the ROI, by changing the length and angle of the sides, undesirable effects of hand rotation during taking images have largely been neutralized. So, depending on the amount of hand rotation, ROI in each image will be different in size and shape. On the other hand, because of the same direction distribution on the dorsal hand vein patterns, we have used the kernel trick on sparse representation to classification. As a result, most samples with different classes but the same direction distribution will be classified properly. Using these two techniques, lead to introduce an effective method against hand rotation, for dorsal hand vein recognition. Increases of 2.26% in the recognition rate is observed for the proposed method when compared to the three conventional SRC-based algorithms and three classification methods based sparse coding that used dictionary learning.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44463791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01 | DOI: 10.22044/JADM.2021.10520.2191
H. Khodadadi, V. Derhami
A prominent weakness of dynamic programming methods is that they perform operations throughout the entire set of states of a Markov decision process in every updating phase. This paper proposes a novel chaos-based method to solve the problem. For this purpose, a chaotic system is first initialized, and the resultant numbers are mapped onto the environment states through initial processing. In each traverse of the policy iteration method, policy evaluation is performed only once, and only a few states are updated. These states are proposed by the chaotic system. In this method, the policy evaluation and improvement cycle lasts until an optimal policy is formulated in the environment. The same procedure is performed in the value iteration method: only the values of a few states proposed by the chaotic system are updated in each traverse, whereas the values of the other states are left unchanged. Unlike conventional methods, the proposed method can obtain an optimal solution by updating only a limited number of states, which the chaotic sequence distributes properly over the environment. The test results indicate the improved speed and efficiency of chaotic dynamic programming methods in obtaining the optimal solution in different grid environments.
{"title":"Improving Speed and Efficiency of Dynamic Programming Methods through Chaos","authors":"H. Khodadadi, V. Derhami","doi":"10.22044/JADM.2021.10520.2191","DOIUrl":"https://doi.org/10.22044/JADM.2021.10520.2191","url":null,"abstract":"A prominent weakness of dynamic programming methods is that they perform operations throughout the entire set of states in a Markov decision process in every updating phase. This paper proposes a novel chaos-based method to solve the problem. For this purpose, a chaotic system is first initialized, and the resultant numbers are mapped onto the environment states through initial processing. In each traverse of the policy iteration method, policy evaluation is performed only once, and only a few states are updated. These states are proposed by the chaos system. In this method, the policy evaluation and improvement cycle lasts until an optimal policy is formulated in the environment. The same procedure is performed in the value iteration method, and only the values of a few states proposed by the chaos are updated in each traverse, whereas the values of other states are left unchanged. Unlike the conventional methods, an optimal solution can be obtained in the proposed method by only updating a limited number of states which are properly distributed all over the environment by chaos. The test results indicate the improved speed and efficiency of chaotic dynamic programming methods in obtaining the optimal solution in different grid environments.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49057333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-31 | DOI: 10.22044/JADM.2021.10018.2138
A. Damia, M. Esnaashari, Mohammadreza Parvizimosaed
In structural software testing, test data generation is essential. Generating test data is a search problem, and search algorithms can be used to solve it. The genetic algorithm is one of the most widely used algorithms in this field, and adjusting its parameters helps to increase its effectiveness. In this paper, an Adaptive Genetic Algorithm (AGA) is used to maintain population diversity for test data generation based on the path coverage criterion; it calculates the recombination and mutation rates from the similarity between chromosomes and the chromosome fitness during each run of the algorithm. Experiments have shown that this method generates test data faster than other versions of the genetic algorithm used by others.
{"title":"Software Testing using an Adaptive Genetic Algorithm","authors":"A. Damia, M. Esnaashari, Mohammadreza Parvizimosaed","doi":"10.22044/JADM.2021.10018.2138","DOIUrl":"https://doi.org/10.22044/JADM.2021.10018.2138","url":null,"abstract":"In the structural software test, test data generation is essential. The problem of generating test data is a search problem, and for solving the problem, search algorithms can be used. Genetic algorithm is one of the most widely used algorithms in this field. Adjusting genetic algorithm parameters helps to increase the effectiveness of this algorithm. In this paper, the Adaptive Genetic Algorithm (AGA) is used to maintain the diversity of the population to test data generation based on path coverage criterion, which calculates the rate of recombination and mutation with the similarity between chromosomes and the amount of chromosome fitness during and around each algorithm. Experiments have shown that this method is faster for generating test data than other versions of the genetic algorithm used by others.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43271571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-31 | DOI: 10.22044/JADM.2021.10837.2224
Elham Pejhan, M. Ghasemzadeh
This research relates to the development of technology for automatic text-to-image generation. Two main goals are pursued: first, the generated image should look as real as possible; second, the generated image should be a meaningful depiction of the input text. Our proposed method is a Multi-Sentence Hierarchical GAN (MSH-GAN) for text-to-image generation. In this research project, we considered two main strategies: 1) produce a higher-quality image in the first step, and 2) use two additional descriptions to improve the original image in the next steps. Our goal is to use more information, in the form of more than one input sentence, to generate images with higher resolution. We have proposed different models based on GANs and memory networks. We have also used a more challenging dataset called ids-ade; this is the first time this dataset has been used in this area. We evaluated our models with the IS, FID, and R-precision metrics. Experimental results demonstrate that our best model performs favorably against basic state-of-the-art approaches such as StackGAN and AttGAN.
{"title":"Multi-Sentence Hierarchical Generative Adversarial Network GAN (MSH-GAN) for Automatic Text-to-Image Generation","authors":"Elham Pejhan, M. Ghasemzadeh","doi":"10.22044/JADM.2021.10837.2224","DOIUrl":"https://doi.org/10.22044/JADM.2021.10837.2224","url":null,"abstract":"This research is related to the development of technology in the field of automatic text to image generation. In this regard, two main goals are pursued; first, the generated image should look as real as possible; and second, the generated image should be a meaningful description of the input text. our proposed method is a Multi Sentences Hierarchical GAN (MSH-GAN) for text to image generation. In this research project, we have considered two main strategies: 1) produce a higher quality image in the first step, and 2) use two additional descriptions to improve the original image in the next steps. Our goal is to focus on using more information to generate images with higher resolution by using more than one sentence input text. We have proposed different models based on GANs and Memory Networks. We have also used more challenging dataset called ids-ade. This is the first time; this dataset has been used in this area. We have evaluated our models based on IS, FID and, R-precision evaluation metrics. Experimental results demonstrate that our best model performs favorably against the basic state-of-the-art approaches like StackGAN and AttGAN.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48550603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-31 | DOI: 10.22044/JADM.2021.10471.2186
M. Nasiri, H. Rahmani
Determining the personality dimensions of individuals is very important in psychological research. The most well-known model of personality dimensions is the Five-Factor Model (FFM). There are two approaches for determining these dimensions: manual and automatic. In the manual approach, psychologists discover the dimensions through personality questionnaires. In the automatic approach, various types of personal input (text, image, video) are gathered and analyzed for this purpose. In this paper, we propose a method called DENOVA (DEep learning based on ANOVA), which predicts FFM using deep learning based on the analysis of variance (ANOVA) of words. DENOVA first applies ANOVA to select the most informative terms. Then, it employs Word2Vec to extract document embeddings. Finally, it uses Support Vector Machine (SVM), logistic regression, XGBoost, and Multilayer Perceptron (MLP) classifiers to predict FFM. The experimental results show that DENOVA outperforms the state-of-the-art methods in predicting FFM by 6.91% on average with respect to accuracy.
{"title":"DENOVA: Predicting Five-Factor Model using Deep Learning based on ANOVA","authors":"M. Nasiri, H. Rahmani","doi":"10.22044/JADM.2021.10471.2186","DOIUrl":"https://doi.org/10.22044/JADM.2021.10471.2186","url":null,"abstract":"Determining the personality dimensions of individuals is very important in psychological research. The most well-known example of personality dimensions is the Five-Factor Model (FFM). There are two approaches 1- Manual and 2- Automatic for determining the personality dimensions. In a manual approach, Psychologists discover these dimensions through personality questionnaires. As an automatic way, varied personal input types (textual/image/video) of people are gathered and analyzed for this purpose. In this paper, we proposed a method called DENOVA (DEep learning based on the ANOVA), which predicts FFM using deep learning based on the Analysis of variance (ANOVA) of words. For this purpose, DENOVA first applies ANOVA to select the most informative terms. Then, DENOVA employs Word2Vec to extract document embeddings. Finally, DENOVA uses Support Vector Machine (SVM), Logistic Regression, XGBoost, and Multilayer perceptron (MLP) as classifiers to predict FFM. The experimental results show that DENOVA outperforms on average, 6.91%, the state-of-the-art methods in predicting FFM with respect to accuracy.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45180102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-23 | DOI: 10.22044/JADM.2021.10200.2158
Somayye Bayatpour, S. Hasheminejad
Most of the methods proposed for segmenting image objects are supervised and are costly because they need large amounts of labeled data. In this article, however, we present a method for segmenting objects based on meta-heuristic optimization which does not need any training data. The procedure consists of two main stages: edge detection and texture analysis. In the edge detection stage, we utilize invasive weed optimization (IWO) and local thresholding. Edge detection methods based on local histograms are efficient, but it is very difficult to determine the desired parameters manually, and these parameters must be selected specifically for each image. In this paper, a method is presented for automatically determining these parameters using an evolutionary algorithm. Evaluation of this method demonstrates its high performance on natural images.
{"title":"Object Segmentation using Local Histograms, Invasive Weed Optimization Algorithm and Texture Analysis","authors":"Somayye Bayatpour, S. Hasheminejad","doi":"10.22044/JADM.2021.10200.2158","DOIUrl":"https://doi.org/10.22044/JADM.2021.10200.2158","url":null,"abstract":"Most of the methods proposed for segmenting image objects are supervised methods which are costly due to their need for large amounts of labeled data. However, in this article, we have presented a method for segmenting objects based on a meta-heuristic optimization which does not need any training data. This procedure consists of two main stages of edge detection and texture analysis. In the edge detection stage, we have utilized invasive weed optimization (IWO) and local thresholding. Edge detection methods that are based on local histograms are efficient methods, but it is very difficult to determine the desired parameters manually. In addition, these parameters must be selected specifically for each image. In this paper, a method is presented for automatic determination of these parameters using an evolutionary algorithm. Evaluation of this method demonstrates its high performance on natural images.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47068859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-24 | DOI: 10.22044/JADM.2021.10117.2149
F. Yazdi, M. Hosseinzadeh, S. Jabbehdari
Wireless body area networks (WBANs) are innovative technologies that are anticipated to greatly promote healthcare monitoring systems. A WBAN includes biomedical sensors that can be worn on or implanted in the body. The sensors monitor vital signs, process the data, and transmit them to a central server. Biomedical sensors are limited in energy resources and need an improved design for managing energy consumption. Therefore, DTEC-MAC (Diverse Traffic with Energy Consumption-MAC) is proposed; it is based on the priority of data classification in the cluster nodes and delivers medical data based on energy management. The proposed method uses fuzzy logic based on the distance to the sink, the remaining energy, and the data length to select the cluster head. MATLAB software was used to simulate the method, and it was compared with similar methods called iM-SIMPLE, M-ATTEMPT, and ERP. The simulation results indicate that it better extends the network lifetime and guarantees minimum energy consumption and packet delivery rates while maximizing throughput.
{"title":"DTEC-MAC: Diverse Traffic with Guarantee Energy Consumption for MAC in Wireless Body Area Networks","authors":"F. Yazdi, M. Hosseinzadeh, S. Jabbehdari","doi":"10.22044/JADM.2021.10117.2149","DOIUrl":"https://doi.org/10.22044/JADM.2021.10117.2149","url":null,"abstract":"Wireless body area networks (WBAN) are innovative technologies that have been the anticipation greatly promote healthcare monitoring systems. All WBAN included biomedical sensors that can be worn on or implanted in the body. Sensors are monitoring vital signs and then processing the data and transmitting to the central server. Biomedical sensors are limited in energy resources and need an improved design for managing energy consumption. Therefore, DTEC-MAC (Diverse Traffic with Energy Consumption-MAC) is proposed based on the priority of data classification in the cluster nodes and provides medical data based on energy management. The proposed method uses fuzzy logic based on the distance to sink and the remaining energy and length of data to select the cluster head. MATLAB software was used to simulate the method. This method compared with similar methods called iM-SIMPLE and M-ATTEMPT, ERP. Results of the simulations indicate that it works better to extend the lifetime and guarantee minimum energy and packet delivery rates, maximizing the throughput.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47794849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}