
Latest publications in Advances in computational intelligence

An unsupervised autonomous learning framework for goal-directed behaviours in dynamic contexts
Pub Date : 2022-06-02 DOI: 10.1007/s43674-022-00037-9
Chinedu Pascal Ezenkwu, Andrew Starkey

Due to their dependence on a task-specific reward function, reinforcement learning agents are ineffective at responding to a dynamic goal or environment. This paper seeks to overcome this limitation of traditional reinforcement learning through a task-agnostic, self-organising autonomous agent framework. The proposed algorithm is a hybrid of TMGWR, for self-adaptive learning of sensorimotor maps, and value iteration, for goal-directed planning. TMGWR has previously been demonstrated to overcome the problems associated with competing sensorimotor techniques such as SOM, GNG, and GWR; these problems include difficulty in setting a suitable number of neurons for a task, inflexibility, the inability to cope with non-Markovian environments, challenges with noise, and inappropriate joint representation of sensory observations and actions. However, the binary sensorimotor-link implementation in the original TMGWR causes catastrophic forgetting when the agent experiences changes in the task, and it is therefore not suitable for self-adaptive learning. A new sensorimotor-link update rule is presented in this paper to enable the adaptation of the sensorimotor map to new experiences. The paper demonstrates that the TMGWR-based algorithm has better sample efficiency than model-free reinforcement learning and better self-adaptivity than both model-free and traditional model-based reinforcement learning algorithms. Moreover, the algorithm is shown to incur the lowest overall computational cost when compared with traditional reinforcement learning algorithms.
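The planning component pairs the learned sensorimotor map with classical value iteration. As a minimal, self-contained sketch of that planning step (a toy deterministic state graph standing in for the learned map, not the authors' TMGWR implementation), value iteration toward a goal state might look like:

```python
# Toy value iteration over a small deterministic state graph.
# States and transitions here are hypothetical stand-ins for the
# nodes of a learned sensorimotor map; reward is 1.0 on reaching the goal.

def value_iteration(transitions, goal, gamma=0.9, tol=1e-6):
    """transitions[s] = {action: next_state}."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            if s == goal:
                continue  # the goal is absorbing; its value stays 0
            best = max(
                (1.0 if s2 == goal else 0.0) + gamma * V[s2]
                for s2 in transitions[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# A 4-state chain: 0 -> 1 -> 2 -> 3 (goal), each state also has a "stay" action.
transitions = {
    0: {"fwd": 1, "stay": 0},
    1: {"fwd": 2, "stay": 1},
    2: {"fwd": 3, "stay": 2},
    3: {"stay": 3},
}
V = value_iteration(transitions, goal=3)  # values decay geometrically with distance to goal
```

Because the chain is deterministic, the converged values are simply gamma raised to the number of steps remaining before the rewarded transition.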

Citations: 0
Machine learning cutting forces in milling processes of functionally graded materials
Pub Date : 2022-05-27 DOI: 10.1007/s43674-022-00036-w
Xiaojie Xu, Yun Zhang, Yunlu Li, Yunyao Li

Machine learning approaches can serve as powerful tools in the machining optimization process. Criteria such as accuracy and stability are important to consider when choosing among different models. For industrial application, it is also essential to balance cost, applicability, and ease of implementation. Here, we develop Gaussian process regression models for predicting the main cutting force (R) and its components in the three directions of the coordinate system (F_x, F_y, and F_z) based on two predictors: the depth of cut (a_p) and the feed rate (f) in milling processes of functionally graded materials. The models show high accuracy and stability, and are thus promising for estimating the cutting force and its components in a fast, cost-effective, and robust fashion.
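The regression setup described above, two predictors (a_p, f) mapped to a force, can be sketched with a minimal noise-free Gaussian process regressor written from scratch. The training triples below are hypothetical illustrative values, not the paper's measurements, and a real application would use a library such as scikit-learn:

```python
import math

def rbf(x, z, length=1.0):
    """Squared-exponential kernel on 2-D inputs (a_p, f)."""
    d2 = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-d2 / (2.0 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(X, y, x_star, noise=1e-8):
    """GP posterior mean: k_*^T (K + noise*I)^(-1) y."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    alpha = solve(K, y)
    k_star = [rbf(x_star, xi) for xi in X]
    return sum(ks * a for ks, a in zip(k_star, alpha))

# Hypothetical (depth of cut a_p [mm], feed rate f [mm/rev]) -> force [N] samples.
X = [(0.5, 0.1), (1.0, 0.2), (1.5, 0.3)]
y = [120.0, 260.0, 410.0]
force = gp_predict(X, y, (1.0, 0.2))  # at a training input, recovers ~its target
```

With the tiny jitter term, the noise-free GP interpolates the training data, so querying a training input returns (almost exactly) its recorded force.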

Citations: 4
Comparative analysis of super-resolution reconstructed images for micro-expression recognition
Pub Date : 2022-05-14 DOI: 10.1007/s43674-022-00035-x
Pratikshya Sharma, Sonya Coleman, Pratheepan Yogarajah, Laurence Taggart, Pradeepa Samarasinghe

It is an established fact that the genuineness of facial micro-expression is an effective means for estimating concealed emotions (Li et al., in Micro-expression recognition under low-resolution cases. SciTePress, Science and Technology Publications, Setúbal, 2019). Conventionally, analysis of these expressions has been performed using high-resolution images, which are the ideal case. However, in a real-world scenario, capturing expressions with high-resolution images may not always be possible, particularly with low-cost surveillance cameras. Faces captured by such cameras are often very small and of poor resolution. Due to the loss of discriminative features, these images may not be of much use, particularly for identifying certain minute facial details. To make these images useful, enhancing the textural information becomes essential, and super-resolution algorithms are ideal for this. In this work, we utilize algorithms based on deep learning and generative adversarial networks for transforming low-resolution micro-expression images into super-resolution images and examine their fitness for micro-expression recognition in particular. The proposed approach is tested on simulated datasets obtained from two popular spontaneous micro-expression datasets, namely CASME II and SMIC-VIS; the experimental results demonstrate that the method achieved favourable results, with the best recognition performance recorded as 61.63%.

Citations: 1
How to generate data for acronym detection and expansion
Pub Date : 2022-04-13 DOI: 10.1007/s43674-021-00024-6
Sing Choi, Piyush Puranik, Binay Dahal, Kazem Taghva

Finding the definitions of acronyms in any given text has been an ongoing problem with multiple proposed solutions. In this paper, we use the Bidirectional Encoder Representations from Transformers (BERT) question-answering model provided by Google to find acronym definitions in a given text. Given an acronym and a passage containing it, our model is expected to find the expansion of the acronym in the passage. Through our experiments, we show that this model can correctly predict 94% of acronym expansions, assuming a Jaro–Winkler threshold distance of greater than 0.8. One of the main contributions of this paper is a systematic method to create datasets and use them to build a corpus for acronym expansion. Our approach to data generation can be used in many applications where there are no standard datasets.
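The 0.8 acceptance threshold above is a Jaro–Winkler similarity between the predicted and reference expansions. As a sketch of how such a check could be scored (a plain pure-Python implementation; in practice one would typically use a string-similarity library such as jellyfish), the classic metric is:

```python
def jaro(s1, s2):
    """Jaro similarity: matches within a sliding window, minus transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    window = max(len1, len2) // 2 - 1
    m1, m2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(i + window + 1, len2)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    k = trans = 0
    for i in range(len1):          # count transpositions among matched chars
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                trans += 1
            k += 1
    trans //= 2
    return (matches / len1 + matches / len2 + (matches - trans) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost Jaro similarity by a shared prefix of up to 4 characters."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a == b and prefix < 4:
            prefix += 1
        else:
            break
    return j + prefix * p * (1 - j)

sim = jaro_winkler("MARTHA", "MARHTA")  # the textbook pair, ~0.961
```

A predicted expansion would then count as correct when `jaro_winkler(prediction, gold) > 0.8`.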

Citations: 2
Machine learning for diabetes clinical decision support: a review
Pub Date : 2022-04-13 DOI: 10.1007/s43674-022-00034-y
Ashwini Tuppad, Shantala Devi Patil

Type 2 diabetes has recently acquired the status of an epidemic silent killer, though it is non-communicable. There are two main reasons behind this perception of the disease. First, a gradual but exponential growth in disease prevalence has been witnessed irrespective of age group, geography, or gender. Second, the disease dynamics are very complex in terms of the multifactorial risks involved, the initial asymptomatic period, the different short-term and long-term complications posing serious health threats, and related co-morbidities. The majority of its risk factors are lifestyle habits, such as physical inactivity, lack of exercise, high body mass index (BMI), poor diet, and smoking, apart from some inevitable ones such as family history of diabetes, ethnic predisposition, and ageing. Nowadays, machine learning (ML) is increasingly being applied to alleviate the diabetes health burden, and many research works have been proposed in the literature to offer clinical decision support in different application areas as well. In this paper, we present a review of such efforts for the prevention and management of type 2 diabetes. Firstly, we present the medical gaps in the diabetes knowledge base, guidelines, and medical practice identified from relevant articles and highlight those that can be addressed by ML. Further, we review ML research works in three different application areas, namely (1) risk assessment (statistical risk scores and ML-based risk models), (2) diagnosis (using non-invasive and invasive features), and (3) prognosis (from normoglycemia/prior morbidity to incident diabetes, and from incident diabetes to related complications). We discuss and summarize the shortcomings and gaps in existing ML methodologies for diabetes to be addressed in future. This review covers the breadth of ML predictive modeling applications for diabetes while highlighting the medical and technological gaps as well as the various aspects involved in ML-based diabetes clinical decision support.

Citations: 1
Design of adaptive hybrid classification model using genetic-based linear adaptive skipping training (GLAST) algorithm for health-care dataset
Pub Date : 2022-03-23 DOI: 10.1007/s43674-021-00030-8
Manjula Devi Ramasamy, Keerthika Periasamy, Suresh Periasamy, Suresh Muthusamy, Hitesh Panchal, Pratik Arvindbhai Solanki, Kirti Panchal

Machine-learning techniques are being used in the health-care industry to improve care delivery at lower cost and in less time. The Artificial Neural Network (ANN) is a machine-learning technique well known for its diagnostic applications, but it is also increasingly being utilized to guide health-care management decisions. At the same time, in the healthcare industry, the ANN has made significant progress in solving a variety of real-world classification problems, ranging from linear to non-linear and from simple to complex. In this research work, an Adaptive Hybrid Classification Model, named the Genetic-based Linear Adaptive Skipping Training (GLAST) algorithm, is proposed for health-care datasets. It is designed as a two-stage process. In the first stage, a Genetic Algorithm (GA) is used to optimize the learning rate; the optimal learning rate obtained for the ANN model is η = 1e−4. In the second stage, training is carried out using the Linear Adaptive Skipping Training (LAST) algorithm, which reduces the total training time and thus increases the training speed. As a result, the highlighted characteristics of LAST are integrated with the GA to accomplish rapid classification and enhance computational efficiency. According to simulation results on 8 different health-care datasets extracted from the UCI Repository, the proposed GLAST algorithm outperforms both the BPN and LAST algorithms in terms of accuracy and training time. The result analyses prove that the efficiency of the proposed GLAST algorithm surpasses existing techniques such as BPN and LAST in terms of accuracy and training time. On various datasets, experimental results show that GLAST improves accuracy by 4 to 17% over the BPN training algorithm and reduces overall training time by 10 to 57% compared with the BPN training algorithm.
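The first GLAST stage, tuning the learning rate with a GA, can be illustrated with a small toy sketch. Everything below is hypothetical: the fitness function stands in for "validation loss after a short training run" and is simply a bowl with its minimum near 1e−4; it is not the authors' code or objective.

```python
import math
import random

random.seed(42)

def val_loss(lr):
    """Hypothetical stand-in for validation loss as a function of learning
    rate: a quadratic bowl in log-space with its minimum at lr = 1e-4."""
    return (math.log10(lr) + 4.0) ** 2

def ga_tune_lr(pop_size=20, generations=30, lo=1e-6, hi=1e-1):
    # Initial population: learning rates sampled log-uniformly.
    pop = [10 ** random.uniform(-6, -1) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=val_loss)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a * b) ** 0.5                # geometric (log-space) crossover
            child *= 10 ** random.gauss(0, 0.1)   # log-scale mutation
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=val_loss)

best_lr = ga_tune_lr()  # converges near 1e-4 for this toy loss
```

Crossover and mutation act in log-space because learning rates vary over orders of magnitude, a common design choice for hyperparameter search.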

Citations: 2
Instance selection for big data based on locally sensitive hashing and double-voting mechanism
Pub Date : 2022-03-19 DOI: 10.1007/s43674-022-00033-z
Junhai Zhai, Yajie Huang

Increasing data volumes impose unprecedented challenges on traditional data mining in data preprocessing, learning, and analysis, and designing efficient compression, indexing, and searching methods has therefore attracted much attention recently. Inspired by locally sensitive hashing (LSH), the divide-and-conquer strategy, and a double-voting mechanism, we propose an iterative instance selection algorithm, which runs p rounds iteratively to reduce or eliminate the unwanted bias of the optimal solution through double-voting. In each iteration, the proposed algorithm partitions the big dataset into several subsets and distributes them to different computing nodes. In each node, the instances in the local data subset are transformed into Hamming space by l hash functions in parallel, each instance is assigned to one of l hash tables by the corresponding hash code, and instances with the same hash code are put into the same bucket. Then, a proportion of instances is randomly selected from each hash bucket in each hash table to obtain a subset. Thus, l subsets are obtained in total, which are used for voting to select the locally optimal instance subset. The process is repeated p times to obtain p subsets. Finally, the globally optimal instance subset is obtained by voting with the p subsets. The proposed algorithm is implemented on two open-source big data platforms, Hadoop and Spark, and experimentally compared with three state-of-the-art methods on testing accuracy, compression ratio, and running time. The experimental results demonstrate that the proposed algorithm provides excellent performance and outperforms the three baseline methods.
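The hash-and-bucket step described above can be illustrated with random-hyperplane (sign) LSH, which maps each instance to a Hamming code so that nearby instances tend to share a bucket. This is a generic single-node sketch under that assumption, not the paper's distributed Hadoop/Spark implementation:

```python
import random

random.seed(0)

def make_hash(dim, n_bits):
    """One LSH function: n_bits random hyperplanes -> an n_bits Hamming code."""
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(x):
        return tuple(int(sum(p_i * x_i for p_i, x_i in zip(p, x)) >= 0)
                     for p in planes)
    return h

def build_buckets(data, hash_fn):
    """Group instance indices by their hash code (same code -> same bucket)."""
    buckets = {}
    for idx, x in enumerate(data):
        buckets.setdefault(hash_fn(x), []).append(idx)
    return buckets

def sample_subset(buckets, fraction=0.5):
    """Randomly keep a proportion of the instances from every bucket."""
    subset = []
    for members in buckets.values():
        k = max(1, int(len(members) * fraction))
        subset.extend(random.sample(members, k))
    return sorted(subset)

# Two tight clusters in 2-D: sign LSH separates opposite directions.
data = [(1.0, 1.0), (1.1, 0.9), (-1.0, -1.0), (-0.9, -1.1)]
h = make_hash(dim=2, n_bits=4)
buckets = build_buckets(data, h)
subset = sample_subset(buckets, fraction=0.5)
```

In the full algorithm this sampling is repeated for l independent hash functions, and the l resulting subsets vote for the locally optimal subset.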

Cited: 3
Fingertip interactive tracking registration method for AR assembly system
Pub Date : 2022-03-08 DOI: 10.1007/s43674-021-00025-5
Yong Jiu, Wei Jianguo, Wang Yangping, Dang Jianwu, Lei Xiaomei

Aiming at the problems of a single input mode and a lack of naturalness in the assembly process of existing AR systems, a tracking registration method for mobile AR assembly systems is proposed based on multi-quantity, multi-degree-of-freedom natural fingertip interaction. Firstly, real-time and stable tracking of the hand area in complex environments is realized based on hand-region tracking; secondly, fingertip detection and recognition based on K-COS and parallel vectors is used to improve the precision and stability of fingertip recognition; thirdly, the special movement track of the fingertip is recognized with an improved DTW algorithm, which offers strong compatibility and feature-gradient transformation for complex fingertip trajectory recognition; finally, through real-time transformation of the projection relationship between fingertip and virtual object, the interaction between them is made more natural and realistic. The experimental results show that under complex background, illumination, scale, and rotation conditions, the precision of fingertip detection and recognition is about 93%, and the precision of fingertip motion template matching is about 91%. The translation error of the registration method based on visual feature recognition is reduced by about 100 px compared with the fingertip tracking registration method, and the efficiency of the mobile AR-guided assembly method is improved by about 24.77% compared with traditional manually assisted assembly. These data verify the strong interactivity and practicality of fingertip interaction based on the user's multi-quantity, multi-degree-of-freedom features in the process of mobile AR-guided assembly.
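The trajectory-matching step rests on dynamic time warping. The sketch below implements only the standard DTW recurrence (not the paper's improved variant) for matching a fingertip track against gesture templates; the names and the Euclidean point cost are illustrative assumptions.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two 2-D fingertip tracks.

    Points are (x, y) pairs; the per-step cost is their Euclidean distance.
    """
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def match_gesture(track, templates):
    """Return the template name with the smallest DTW distance to `track`."""
    return min(templates, key=lambda name: dtw_distance(track, templates[name]))
```

Because DTW aligns sequences of different lengths, a slow swipe and a fast swipe map to the same template, which is what makes it attractive for free-hand fingertip trajectories.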

Advances in computational intelligence, vol. 2, no. 2 (2022). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43674-021-00025-5.pdf
Cited: 2
Sapientia: a Smart Campus model to promote device and application flexibility
Pub Date : 2022-02-09 DOI: 10.1007/s43674-022-00032-0
Bianca S. Brand, Sandro J. Rigo, Rodrigo M. Figueiredo, Jorge L. V. Barbosa

The expansion of the Internet of Things and Information and Communication Technology allows intelligent concepts to be applied to university campus spaces, and several Smart Campus models have been implemented recently. However, solutions that allow new hardware and software to be flexibly incorporated into existing infrastructure remain a gap, motivating this research. The Sapientia smart campus model promotes flexibility by facilitating the incorporation of new solutions into existing infrastructure; its architecture is composed of layers that facilitate technology management and updates. The model was implemented on a university campus, allowing experiments that evaluate the incorporation of new hardware and applications. These included a mobile application to support user orientation and internal applications to collect and process campus information, such as temperature. The experiments show how the model incorporates a new hardware component and two new applications on the existing infrastructure; they also demonstrate the use of the installed devices in more than one application, with distinct configurations and purposes.
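The claim that one installed device can serve several applications with distinct purposes can be illustrated with a minimal publish/subscribe sketch. The class, sensor, and handler names here are hypothetical and not the Sapientia API; the sketch only shows the fan-out pattern a layered smart-campus architecture relies on.

```python
from typing import Callable, Dict, List

class DeviceRegistry:
    """Device layer: registers sensors and fans readings out to subscribers."""

    def __init__(self):
        self._subscribers: Dict[str, List[Callable[[float], None]]] = {}

    def subscribe(self, sensor_id: str, handler: Callable[[float], None]):
        """Attach an application-layer handler to one physical sensor."""
        self._subscribers.setdefault(sensor_id, []).append(handler)

    def publish(self, sensor_id: str, value: float):
        """Deliver one reading to every application subscribed to the sensor."""
        for handler in self._subscribers.get(sensor_id, []):
            handler(value)

# Application layer: two apps with distinct purposes share one temperature sensor.
readings: List[float] = []   # logging application's store
alerts: List[str] = []       # alerting application's store

registry = DeviceRegistry()
registry.subscribe("temp-lab-01", readings.append)
registry.subscribe("temp-lab-01",
                   lambda t: alerts.append(f"hot: {t}") if t > 30 else None)

for t in (22.5, 31.0):
    registry.publish("temp-lab-01", t)
```

Adding a third application then means one more `subscribe` call, with no change to the device layer, which is the flexibility property the model targets.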

Advances in computational intelligence, vol. 2, no. 1 (2022). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43674-022-00032-0.pdf
Cited: 2
Application of control strategies and machine learning techniques in prosthetic knee: a systematic review
Pub Date : 2022-02-07 DOI: 10.1007/s43674-021-00031-7
Rajesh Kumar Mohanty, R. C. Mohanty, Sukanta Kumar Sabut

This systematic review focuses on control strategies and machine learning techniques used in prosthetic knees to restore mobility for individuals with trans-femoral amputations. It reviews and classifies the control strategies that determine how these prosthetic knees interact with the user, together with gait-strategy-inspired algorithms for phase identification, locomotion-mode detection, and motion-intention recognition. Relevant studies were identified using electronic databases such as PubMed, EMBASE, SCOPUS, and the Cochrane Controlled Trials Register (Rehabilitation and Related Therapies) up to April 2021. Abstracts were screened, and inclusion and exclusion criteria were applied. Out of 278 potentially relevant studies, 65 articles were included. The specific variables on control approach, control modes, gait control, hardware level, machine learning algorithm, and measured-signal mechanism were extracted and added to a summary table. The results indicate that advanced methods for adapting position or torque and for automatically detecting terrains or gait modes are increasingly common, but they remain largely limited to laboratory environments. It is concluded that the right combination of control strategies and machine learning techniques will improve prosthetic performance and enhance the standard of amputees' lives.
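Among the control strategies such reviews classify, rule-based gait-phase detection is the simplest family. The sketch below is a generic hysteresis state machine over a normalized heel-load signal; the thresholds and the signal are illustrative, not drawn from any reviewed system.

```python
def detect_gait_phase(signal, on=0.25, off=0.15):
    """Label each sample 'stance' or 'swing' with a hysteresis state machine.

    The phase switches to stance when normalized heel load rises above `on`
    and back to swing when it falls below `off`; the two thresholds prevent
    chattering around a single boundary. Values here are illustrative.
    """
    phase, out = "swing", []
    for load in signal:
        if phase == "swing" and load >= on:
            phase = "stance"
        elif phase == "stance" and load <= off:
            phase = "swing"
        out.append(phase)
    return out
```

In the literature the review surveys, such finite-state controllers typically gate a per-phase impedance law, while machine-learning classifiers replace the hand-tuned thresholds with learned decision boundaries over richer sensor features.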

Advances in computational intelligence, vol. 2, no. 1 (2022). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43674-021-00031-7.pdf
Cited: 0