
International Journal of Applied Mathematics Electronics and Computers — Latest Publications

Analysis and detection of Titanic survivors using generalized linear models and decision tree algorithm
Pub Date : 2020-10-09 DOI: 10.18100/ijamec.785297
Burcu Durmuş, Ö. I. Güneri
This article investigates, with different methods, the factors affecting survival in the legendary Titanic disaster. The analysis aims to find the method that best predicts survival. For this purpose, the logit and probit models from the family of generalized linear models and the random tree algorithm from decision tree methods were used. The study was carried out in two stages. First, the analysis with generalized linear models identified the variables that did not contribute significantly to the model; classification accuracy was 79.89% for the logit model and 79.04% for the probit model. In the second stage, classification analysis was performed with random tree decision trees, with a classification accuracy of 77.21%. The classification analysis was then repeated after removing the variables that, according to the generalized linear models, made no meaningful contribution to the model. The classification rate increased by 4.36 percentage points, reaching 81.57%. In the end, the decision tree analysis performed with the variables selected via the model gave better results than the analysis performed with the original variables. These results should be useful for researchers working on classification analysis, and can also serve purposes such as data preprocessing and data cleaning.
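The two-stage idea above — fit a linear model, drop the features it finds insignificant, then retrain a tree on the reduced set — can be sketched as follows. This is an illustrative sketch, not the authors' code: the synthetic "Titanic-like" features, their data-generating process, and all parameter choices are invented for demonstration.

```python
# Sketch: compare a logit model and a decision tree on synthetic survival data,
# before and after dropping features that carry no signal for the linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, n)          # informative feature
pclass = rng.integers(1, 4, n)       # informative feature
noise = rng.normal(size=(n, 3))      # uninformative columns (to be dropped)
logits = 2.0 * sex - 0.8 * pclass + 1.0
survived = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_full = np.column_stack([sex, pclass, noise])
X_reduced = X_full[:, :2]            # stage 2: keep only significant features

for name, X in [("full", X_full), ("reduced", X_reduced)]:
    Xtr, Xte, ytr, yte = train_test_split(X, survived, random_state=0)
    logit_acc = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)
    tree_acc = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: logit={logit_acc:.3f}, tree={tree_acc:.3f}")
```

In this toy setting the tree trained on the reduced feature set cannot overfit the noise columns, which mirrors the paper's observation that feature removal improved the tree's accuracy.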
Citations: 1
Classification of Emg Signals Using Convolution Neural Network
Pub Date : 2020-10-09 DOI: 10.18100/ijamec.795227
Kaan Bakircioğlu, Nalan Özkurt
An electrical signal is produced by the contraction of the muscles; this signal carries information about the muscles, and the recording of these signals is called electromyography (EMG). This information is often used in studies such as prosthetic arms, muscle damage detection, and motion detection. Classifiers such as artificial neural networks and support vector machines are generally used for the classification of EMG signals. Despite successful results with such methods, the extraction and selection of the features given to the classifiers affect the classification success. This study aims to improve the classification of everyday hand movements using convolutional neural networks (CNNs). The advantage of deep learning techniques like CNNs is that the relationships in big data are learned by the network itself. First, the recorded forearm EMG signals are windowed to increase the number of samples and to focus on the contraction points. Then, to compare success rates, the raw signals, the Fourier transform of the signal, the root mean square, and the empirical mode decomposition (EMD), which yields intrinsic mode functions, are each applied, and the resulting signals are given to four different CNNs. Afterwards, to find the most efficient parameters, results were obtained by splitting the data set into a 70% training set, a 15% validation set, and a 15% test set; 5-fold cross-validation was applied to assess the system's performance. The best results are obtained from the CNN that receives the EMD-processed signal as input: 95.90% with cross-validation and 93.70% with the hold-out split. Examination of the results shows that a CNN is a promising classifier even when the raw signal is applied to it directly, and that the EMD method yields better classification accuracy.
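The windowing step described above can be sketched in a few lines of NumPy. This is a minimal sketch of the preprocessing idea only: the window length, stride, and the simulated contraction burst are assumed values, not taken from the paper.

```python
# Sliding-window segmentation of an EMG channel plus a per-window RMS feature,
# which highlights windows that overlap a muscle contraction.
import numpy as np

def window_signal(x, win=200, step=100):
    """Split a 1-D signal into overlapping windows of `win` samples, stride `step`."""
    idx = np.arange(0, len(x) - win + 1, step)
    return np.stack([x[i:i + win] for i in idx])

def rms(windows):
    """Root-mean-square feature, one value per window."""
    return np.sqrt(np.mean(windows ** 2, axis=1))

rng = np.random.default_rng(1)
emg = rng.normal(scale=0.1, size=2000)             # resting baseline
emg[800:1200] += rng.normal(scale=1.0, size=400)   # simulated contraction burst

w = window_signal(emg)
features = rms(w)
print(w.shape, features.argmax())
```

The highest-RMS window falls inside the simulated burst, which is how windowing "focuses on the contraction points" before the windows (or features derived from them) are fed to a classifier.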
This is an open access article under the CC BY-SA 4.0 license. (https://creativecommons.org/licenses/by-sa/4.0/)
Citations: 3
Control and Monitor of IoT Devices using EOG and Voice Commands
Pub Date : 2020-10-01 DOI: 10.18100/IJAMEC.799507
Ayman A. Wazwaz, Mohammad Ziada, Lubna Awawdeh, M. Tahboub
This paper aims to deploy a machine to control and monitor home devices and to assist people who suffer from spinal cord injuries, which cause them to lose the use of their body movements, in controlling those devices; able-bodied people may use voice commands as well. The prototype used an electrooculography (EOG) system [1, 2, 3]. Patients who suffer from spinal cord injuries may use this system to control household appliances, and the voice system may be used to control home devices. The prototype uses Internet of Things (IoT) technology over Wi-Fi with an Arduino microcontroller to capture eye-muscle movement signals taken from patients, or voice signals that are compared with pre-recorded voice commands. Many tests were made to assure correctness and speed under different environment parameters and conditions. In the best cases, the error rate was 2.5% for EOG and 1% for voice commands. The idea could be developed further: smartphones and mobile data could be used to control and monitor homes remotely.
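The comparison against pre-recorded commands can be illustrated with a toy template-matching sketch. Everything here is invented for illustration — the command names, the synthetic signals, and the normalized-correlation matcher are not the prototype's actual Arduino implementation.

```python
# Toy sketch: pick the pre-recorded command template that best matches a
# captured signal, using normalized correlation as the similarity measure.
import numpy as np

def best_match(signal, templates):
    """Return the name of the template with the highest normalized correlation."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.dot(a, b) / len(a))
    return max(templates, key=lambda name: ncc(signal, templates[name]))

t = np.linspace(0, 1, 500)
templates = {                                   # hypothetical command library
    "lights_on": np.sin(2 * np.pi * 5 * t),
    "lights_off": np.sin(2 * np.pi * 9 * t),
}
captured = templates["lights_off"] + np.random.default_rng(2).normal(scale=0.3, size=500)
print(best_match(captured, templates))          # → lights_off
```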
Citations: 1
Discovering the same job ads expressed with the different sentences by using hybrid clustering algorithms
Pub Date : 2020-09-30 DOI: 10.18100/IJAMEC.797572
Y. Dogan, Feriştah Dalkılıç, R. A. Kut, K. C. Kara, Uygar Takazoğlu
Text mining studies on job ads have become widespread in recent years as a way to determine the qualifications required for each position. Research on Turkish is limited, while a large resource pool exists for English. Kariyer.Net is the biggest job-ads company in Turkey, and 99% of its ads are in Turkish. There is therefore a need to develop novel Turkish Natural Language Processing (NLP) models for analyzing this big database. In this study, the job ads of Kariyer.Net have been analyzed, and, using a hybrid clustering algorithm, the hidden associations in this big-data set have been discovered. First, all ads, stored as HTML, were transformed into regular sentences by extracting the inner texts from the HTML code. Then these inner texts containing the core ads were converted into sub-ads by traditional methods. After these NLP steps, hybrid clustering algorithms were used, and the same ads expressed with different sentences could be detected. The analysis focused on 57 Information Technology positions comprising 6,897 ad texts. As a result, the clusters obtained contain useful outcomes, and the proposed model can be used to discover common and unique ads for each position.
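A common baseline for "same ad, different sentences" detection is TF-IDF plus cosine similarity; a hedged sketch of that idea follows. This is not the authors' pipeline (they use hybrid clustering on Turkish text), and the sample ads are invented English placeholders.

```python
# Sketch: flag pairs of ads whose TF-IDF vectors are cosine-similar above a
# threshold — candidates for "the same ad expressed with different sentences".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ads = [
    "We are looking for a Java developer with Spring experience.",
    "Java developer wanted; experience with Spring is required.",
    "Senior accountant needed for our finance department.",
]
tfidf = TfidfVectorizer().fit_transform(ads)
sim = cosine_similarity(tfidf)

threshold = 0.3   # assumed cutoff for this toy example
pairs = [(i, j) for i in range(len(ads)) for j in range(i + 1, len(ads))
         if sim[i, j] > threshold]
print(pairs)      # the two Java ads pair up; the accountant ad does not
```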
Citations: 1
Detection and differentiation of COVID-19 using deep learning approach fed by x-rays
Pub Date : 2020-09-30 DOI: 10.18100/IJAMEC.799651
Ç. Erdaş, Didem Ölçer
The coronavirus, which appeared in China in late 2019, spread across the world and became an epidemic. Although the mortality rate is not very high, the high rate of spread has disrupted the lives of people around the world. Moreover, compared to other individuals in society, the mortality rate among elderly individuals and people with chronic disease is high. Early detection of infected individuals is one of the most effective ways both to fight the disease and to slow the outbreak. In this study, a deep learning approach fed with chest X-rays has been developed as an alternative to, and in support of, traditional diagnostic tools. The purpose of this approach, built on a convolutional neural network (CNN) architecture, is (1) to diagnose pneumonia caused by the coronavirus and (2) to determine whether pneumonia visible on a chest X-ray is caused by bacteria or by the coronavirus. For this purpose, a new database has been assembled from various publicly available sources. This dataset includes 50 chest X-rays from people diagnosed with coronavirus-caused pneumonia, 50 chest X-rays from healthy individuals in the control group, and 50 chest X-rays from people diagnosed with bacterial pneumonia. The approach achieved an accuracy of 92% for the coronavirus-based pneumonia diagnosis task (1) and 81% for the task of identifying the origin of the pneumonia (2). In addition, results for the Area Under the ROC Curve (ROC_AUC), Precision, Recall, F1-score, Specificity, and Negative Predictive Value (NPV) metrics are reported in the paper.
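The evaluation metrics listed at the end of the abstract all derive from a 2x2 confusion matrix; a short sketch makes the definitions concrete. The counts below are hypothetical and are not the paper's results.

```python
# Precision, recall (sensitivity), specificity, NPV, F1, and accuracy computed
# from true/false positive/negative counts of a binary classifier.
def binary_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)               # negative predictive value
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dict(precision=precision, recall=recall, specificity=specificity,
                npv=npv, f1=f1, accuracy=accuracy)

m = binary_metrics(tp=46, fp=4, fn=4, tn=46)   # hypothetical counts
print({k: round(v, 3) for k, v in m.items()})
```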
Citations: 3
Improved Global Localization and Resampling Techniques for Monte Carlo Localization Algorithm
Pub Date : 2020-09-30 DOI: 10.18100/IJAMEC.800166
Humam Abualkebash, H. Ocak
Global indoor localization algorithms enable a robot to estimate its pose in pre-mapped environments from sensor measurements when its initial pose is unknown. The conventional Adaptive Monte Carlo Localization (AMCL) is a highly efficient localization algorithm that can successfully cope with global uncertainty. Since the global localization problem is paramount in mobile robots, we propose a novel approach that can significantly reduce the time it takes for the algorithm to converge to the true pose. Given the map and initial scan data, the proposed algorithm detects regions of high likelihood based on the observation model; the resulting sample distribution expedites the localization process. In this study, we also present an effective resampling strategy for the kidnapped-robot problem that enables the robot to recover quickly when the sample weights drop due to unmapped dynamic obstacles within the sensor's field of view. The proposed approach distributes the random samples within a circular region centered on the robot's pose, taking into account prior knowledge from the most recent successful pose estimate. Since the samples are distributed over high-probability regions, they converge to the actual pose in less time. The improvement for a small sample set (500 samples) exceeded 90% on large maps and played a big role in reducing computational resources. In general, the results demonstrate the localization efficacy of the proposed scheme, even with small sample sets. Consequently, the proposed scheme increases the real-time performance of the algorithm by 85.12% on average by decreasing the computational cost.
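For context on the resampling step that the paper improves upon, here is a NumPy sketch of the standard low-variance (systematic) resampling used in particle filters such as AMCL. This illustrates the baseline technique, not the paper's proposed strategy, and the weights are illustrative.

```python
# Systematic (low-variance) resampling: one random offset, n evenly spaced
# pointers into the cumulative weight distribution. High-weight particles
# are duplicated; low-weight particles tend to be discarded.
import numpy as np

def systematic_resample(weights, rng):
    """Return particle indices drawn by systematic resampling."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights / np.sum(weights))
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(3)
weights = np.array([0.05, 0.05, 0.8, 0.05, 0.05])
idx = systematic_resample(weights, rng)
print(idx)   # most surviving particles come from the high-weight index 2
```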
Citations: 1
Framingham Risk Score by Data Mining Method
Pub Date : 2020-09-30 DOI: 10.18100/IJAMEC.795224
Ş. Kitiş
Data mining, meaning finding the necessary data among a wide variety of variables and records, involves cleaning, integration, reduction, conversion, algorithm implementation, and evaluation stages. It is important to create a data warehouse to carry out these steps; data randomly selected from the warehouse is then evaluated with certain algorithms. According to 2016 data, heart diseases cause 37% of deaths in our country; 420-440 thousand people are diagnosed as heart patients each year, and annual deaths can reach 340 thousand people. These values are approximately three times those of Europe. In this study, the risk of heart attack is calculated by a data mining method that takes advantage of the Framingham risk score. To determine this risk factor, the 10-year risk is calculated from sex, age, total cholesterol, HDL cholesterol, blood pressure, diabetes, and smoking. The age contribution ranges from -9 to +13 points for men and from -7 to +16 points for women; the total cholesterol contribution ranges from 0 to +11 points for men and from 0 to +13 points for women. Total scores run from 0 to 17 and over in men and from 0 to 25 and over in women, with risk values ranging from 1% to 30%.
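The scoring scheme described above is additive: each risk factor contributes points, and the total maps to a 10-year risk percentage. The sketch below shows that structure only; the point values and the points-to-risk table here are invented placeholders, and the real tables (age -9 to +13 points for men, and so on) must be taken from the published Framingham charts.

```python
# Toy point-based risk score in the Framingham style: sum per-factor points,
# then look up the 10-year risk band for the total.
def risk_points(age_pts, chol_pts, hdl_pts, bp_pts, smoker_pts, diabetes_pts):
    """Total score is the sum of the per-factor point contributions."""
    return age_pts + chol_pts + hdl_pts + bp_pts + smoker_pts + diabetes_pts

def ten_year_risk(total):
    """Hypothetical step table mapping total points to risk percentage."""
    table = {0: 1, 5: 2, 10: 6, 13: 12, 16: 25, 17: 30}
    keys = sorted(k for k in table if k <= max(total, 0))
    return table[keys[-1]] if keys else 1

score = risk_points(age_pts=8, chol_pts=4, hdl_pts=1, bp_pts=1,
                    smoker_pts=2, diabetes_pts=0)
print(score, ten_year_risk(score))   # → 16 25
```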
Citations: 0
Vehicle Detection Using Fuzzy C-Means Clustering Algorithm
Pub Date : 2020-09-30 DOI: 10.18100/ijamec.799431
Ridvan Saraçoglu, N. Nemati
Vehicle detection and identification are very important functions in the field of traffic control and management. Generally, big data sets and the characteristics of the area must be studied to approach this function, the aim being to find the most appropriate model for these data. The model prepared for the data also aims to recognize the factors in the image; in other words, it assigns factors to the right classes and differentiates them, and the image is classified in that way. In this study, a vehicle identification system is presented in which the Fuzzy C-Means algorithm is used for image segmentation and the Support Vector Machine for image classification. The currency of these methods is their most important property. The results obtained show that the selected methods are applied successfully and effectively.
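The Fuzzy C-Means algorithm named above alternates between computing fuzzy cluster centers and updating the membership matrix. A compact NumPy sketch of the standard algorithm (fuzzifier m = 2) follows; the 2-D data and cluster count are illustrative, not the paper's image data.

```python
# Fuzzy C-Means: memberships in [0,1] summing to 1 per point; centers are
# membership-weighted means; memberships update by inverse distance.
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, rng=None):
    rng = rng or np.random.default_rng(0)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))         # standard FCM update rule
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),    # two well-separated blobs
               rng.normal(3, 0.3, (50, 2))])
centers, u = fcm(X)
print(np.round(np.sort(centers[:, 0]), 1))     # centers near 0 and 3
```

In image segmentation the same update runs over pixel feature vectors (e.g. intensity or color), and each pixel's highest membership decides its segment.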
{"title":"Vehicle Detection Using Fuzzy C-Means Clustering Algorithm","authors":"Ridvan Saraçoglu, N. Nemati","doi":"10.18100/ijamec.799431","DOIUrl":"https://doi.org/10.18100/ijamec.799431","url":null,"abstract":"Vehicle detection and identification are very important functions in the field of traffic control and management. Generally, a study should be conducted on big data sets and area characteristics to get closer to this function. The aim is to find the most appropriate model for these data. Also, the model that is prepared for the data aims to recognize the factors on the image. In other words, it aims to assign factors to the right classes and differentiate them. A classification of the image is made in that way. In this study, a vehicle identification system, in which Fuzzy C-Means Algorithm is used for image segmentation and the Support Vector Machine is used for image classification, is presented. The currentness of these methods is their most important property. The obtained results show that the selected methods are applied successfully and effectively.","PeriodicalId":120305,"journal":{"name":"International Journal of Applied Mathematics Electronics and Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126698675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
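The segmentation step the abstract describes can be illustrated with a minimal fuzzy c-means routine. This sketch clusters 1-D pixel intensities only and omits the paper's SVM classification stage; the synthetic intensity values are assumptions for illustration.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m                               # fuzzified memberships
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Segment synthetic pixel intensities into dark (road) and bright (vehicle) clusters.
pixels = np.array([10.0, 12.0, 11.0, 200.0, 205.0, 198.0])
centers, memberships = fuzzy_c_means(pixels)
print(np.sort(centers))
```

With two well-separated intensity groups, the centers converge close to the group means (about 11 and 201), and each pixel's membership row sums to one.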
Evaluating the Bank Queuing Systems by Fuzzy Logic
Pub Date : 2020-09-29 DOI: 10.18100/IJAMEC.797742
Halil Kilif, I. Ozkan
Various models are used in the banking system to organize the queue structure of customers' banking transactions. The average waiting time of a customer in the queue generally varies depending on whether the person is a bank customer and on their customer score. Different uncertain parameters are used to determine the individual queue group and the average waiting time in bank queuing systems. This paper proposes a fuzzy logic-based approach to bank queuing systems. In this study, the individual queue group and the average waiting time are determined from the number of waiting customers, the customer score and the credit score. In addition, the identification number is a determining factor for the priority of transactions in bank queuing systems, and people who are not customers of the bank often face longer waiting times. As a new approach to the working structure of bank queuing systems, this study also suggests that non-customers be given priority sequence numbers according to their credit scores.
{"title":"Evaluating the Bank Queuing Systems by Fuzzy Logic","authors":"Halil Kilif, I. Ozkan","doi":"10.18100/IJAMEC.797742","DOIUrl":"https://doi.org/10.18100/IJAMEC.797742","url":null,"abstract":"Various models are used in the banking system to organize the queue structure of customers' banking transactions. The average waiting time for a customer in the queue generally varies depending on whether bank customer or not and the customer score it has. Different uncertain parameters are used to determine the individual queue group and average waiting time in bank queuing systems. This paper proposes a fuzzy logic-based approach in bank queuing systems. In this study, individual bank queue group and average waiting times are determined according to the number of waiting customers, customer score and credit score parameters. In addition, identification number is a determining factor for the priority of transactions in bank queuing systems. People who are not customers of the bank often have longer waiting times. As a new approach to the working structure of bank queuing systems, this study also suggests that non-bank customers should be given priority sequence numbers according to their credit scores.","PeriodicalId":120305,"journal":{"name":"International Journal of Applied Mathematics Electronics and Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130739989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
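The fuzzy evaluation the abstract describes can be sketched with triangular membership functions over the three inputs (queue length, customer score, credit score) blended into a single priority value. The breakpoints and the weighted aggregation below are illustrative assumptions standing in for the paper's full rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def waiting_priority(queue_len, customer_score, credit_score):
    """Blend fuzzy memberships into a 0..1 priority (higher = serve sooner).
    Membership breakpoints and weights are hypothetical, not from the paper."""
    long_queue    = tri(queue_len, 5, 15, 25)
    good_customer = tri(customer_score, 40, 80, 120)
    good_credit   = tri(credit_score, 1000, 1500, 2000)
    # Simple weighted aggregation standing in for a full fuzzy rule base:
    # short queues, good customers and good credit all raise the priority.
    return round(0.3 * (1 - long_queue) + 0.4 * good_customer + 0.3 * good_credit, 3)

print(waiting_priority(8, 70, 1400))
```

A customer with a moderately short queue, a decent customer score and good credit receives a high priority; a non-customer would enter the same function with a default customer score, so their credit score still influences the sequence number, as the abstract proposes.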
An ANFIS based inverse modeling for pneumatic artificial muscles
Pub Date : 2020-09-28 DOI: 10.18100/IJAMEC.797271
C. V. Baysal
Pneumatic Artificial Muscles (PAM) are soft actuators with the advantages of a high force-to-weight ratio, a flexible structure and low cost. On the other hand, their inherent nonlinear characteristics make modeling and control difficult, which is an important factor restricting the use of PAM. In the literature there are various modeling approaches, such as virtual-work, empirical and phenomenological models. However, these are either very complicated or approximate the muscle as a variable-stiffness spring with a nonlinear input-output relationship. In this work, the behaviour of PAM is interpreted as an integrated response to a pressure input that produces a simultaneous force and muscle-length change. Many existing models do not combine this integrated response effectively in terms of simultaneous resultant force and muscle contraction, and standard identification methods such as NNARX are not suitable for modeling it. Therefore, an inverse model with a grey-box approach is proposed so that the model can be used in control applications. Since neuro-fuzzy inference systems are universal estimators, the modeling is implemented by an ANFIS structure using experimental data collected from a PAM test bed. According to the implementation results, the ANFIS-based inverse model yields satisfactory performance, suggesting that it could be a simple and effective solution to the PAM modeling and control problem.
{"title":"An ANFIS based inverse modeling for pneumatic artificial muscles","authors":"C. V. Baysal","doi":"10.18100/IJAMEC.797271","DOIUrl":"https://doi.org/10.18100/IJAMEC.797271","url":null,"abstract":"Pneumatic Artificial Muscles (PAM) are soft actuators with advantages of high force to weight ratio, flexible structure and low cost. On the other hand, their inherent nonlinear characteristics yield difficulties in modeling and control actions, which is an important factor restricting use of PAM. In literature, there are various modeling approaches such as virtual work , empirical and phenomenological models. However, they appear as either much complicated or are approximate ones as a variable stiffness spring for model with nonlinear input-output relationship. In this work, the behaviour of PAM is interpreted as an integrated response to pressure input that results in a simultaneous force and muscle length change. The integrated response behaviour of PAM is not combined effectively in terms of simultaneous resultant force and muscle contraction in many existing models. In order to implement that response, standard identification methods , for instance NNARX, are not suitable for modeling this behaviour. Moreover, an inverse modeling with grey box approach is proposed in order to utilize the model in control applications. Since Neuro-Fuzzy inference systems are universal estimators, the modeling is implemented by an ANFIS structure using the experimental data collected from PAM test bed. 
According to implementation results, the ANFIS based inverse model has yielded satisfactory performance deducing that it could be a simple and effective solution for PAM modeling and control issue.","PeriodicalId":120305,"journal":{"name":"International Journal of Applied Mathematics Electronics and Computers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133877536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
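The inverse-model idea above, mapping a desired force and contraction back to the pressure input, can be sketched as a tiny Sugeno-type fuzzy inference system of the kind ANFIS trains. The two rules, their Gaussian antecedents and the linear consequent coefficients below are hypothetical placeholders, not the trained ANFIS from the paper.

```python
import math

def gauss(x, c, s):
    """Gaussian membership function centered at c with spread s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def inverse_pam_pressure(force, contraction):
    """Two-rule Sugeno-type (ANFIS-style) inverse model sketch:
    (desired force [N], contraction ratio) -> pressure [MPa].
    All rule parameters are hypothetical."""
    # Layers 1-2: rule firing strengths from antecedent memberships
    w1 = gauss(force, 50, 30) * gauss(contraction, 0.05, 0.05)   # "low demand" rule
    w2 = gauss(force, 150, 30) * gauss(contraction, 0.20, 0.05)  # "high demand" rule
    # Layer 4: first-order linear consequents p = a*F + b*dL + c
    f1 = 0.002 * force + 1.0 * contraction + 0.1
    f2 = 0.003 * force + 1.5 * contraction + 0.2
    # Layer 5: normalized weighted average (defuzzification)
    return (w1 * f1 + w2 * f2) / (w1 + w2)

print(inverse_pam_pressure(100, 0.1))
```

In a real ANFIS, the Gaussian centers/spreads and the consequent coefficients would be fitted to the PAM test-bed data by hybrid least-squares/backpropagation training; the structure of the computation is the same.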
International Journal of Applied Mathematics Electronics and Computers