Pub Date: 2021-01-01. DOI: 10.1016/j.cogr.2021.07.002
Zhen Zhang, Liucun Zhu, Xiaodong Zheng
In this paper, the dynamic engagement characteristics of a wet clutch are simulated by the finite element method. For the fluid friction regime, the average Reynolds equation is amended and cast in dimensionless parameters, and it is applied to calculate the viscous torque. For the boundary friction regime, a surface elastic contact model is established to calculate the rough contact torque. In the mixed friction regime, the total torque consists of the viscous torque and the rough contact torque. Experimental comparisons between the simulations and SAE#2 bench tests are provided to verify the validity of the proposed method: the engagement time errors, the maximum output torque errors and the average output torque errors are at most 4.86%, 3.87% and 0.73%, respectively. The proposed method can be used to guide the design of wet clutches in the early stages of product development.
{"title":"Simulations versus tests for dynamic engagement characteristics of wet clutch","authors":"Zhen Zhang, Liucun Zhu, Xiaodong Zheng","doi":"10.1016/j.cogr.2021.07.002","DOIUrl":"10.1016/j.cogr.2021.07.002","url":null,"abstract":"<div><p>In this paper, the dynamic engagement characteristics of wet clutch are simulated by finite element method. In the fluid friction, the average Reynolds equation is amended and dimensionless parameters are involved, which is applied to calculate the viscous torque. In the boundary friction, a surface elastic contact model is established to calculate rough contact torque. In the mixed friction, total torque consists of viscus torque and rough contact torque. Experimental comparisons between the simulations and the SAE#2 bench tests are provide to verify the validity of the proposed method, the engagement time errors, the output torques maximum errors and the output torques average errors are utmost 4.86%, 3.87% and 0.73% respectively. The proposed method can be used to guide the design of wet clutches in early stages of product development.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 125-133"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.07.002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88155385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a novel algorithm combining object detection and the potential field algorithm for autonomous operation of a SCARA arm. The start, obstacle, and goal states are detected and located with a RetinaNet model. The model is initialized from standard pre-trained weights as checkpoints and is trained with images from the working environment of the SCARA arm. The potential field algorithm then plans a suitable path from the start state to the goal state, avoiding the obstacle states, based on the results of the object detection model. The algorithm is tested on a real prototype with promising results.
{"title":"Vision-based intelligent path planning for SCARA arm","authors":"Yogesh Gautam , Bibek Prajapati , Sandeep Dhakal , Bibek Pandeya , Bijendra Prajapati","doi":"10.1016/j.cogr.2021.09.002","DOIUrl":"10.1016/j.cogr.2021.09.002","url":null,"abstract":"<div><p>This paper proposes a novel algorithm combining object detection and potential field algorithm for autonomous operation of SCARA arm. The start, obstacles, and goal states are located and detected through the RetinaNet Model. The model uses standard pre-trained weights as checkpoints which is trained with images from the working environment of the SCARA arm. The potential field algorithm then plans a suitable path from start to goal state avoiding obstacle state based on results from the object detection model. The algorithm is tested with a real prototype with promising results.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 168-181"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241321000161/pdfft?md5=e9df1be748e973a1418b8b610e72d135&pid=1-s2.0-S2667241321000161-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83111735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01. DOI: 10.1016/j.cogr.2021.06.003
Jiaxing Sun, Yujie Li
Big data-driven deep learning methods have been widely used in image and video segmentation. However, in practical applications, training a deep learning model requires a large amount of labeled data, which is difficult to obtain. Meta-learning, one of the most promising research areas in the field of artificial intelligence, is believed to be a key tool for approaching artificial general intelligence. Compared with traditional deep learning algorithms, meta-learning can adapt to a new learning task quickly and complete the corresponding learning with less data. To the best of our knowledge, there is little research on meta-learning-based visual segmentation. To this end, this paper summarizes the algorithms and the current state of image and video segmentation technologies based on meta-learning and points out the future trends of meta-learning. Because meta-learning lends itself to segmentation based on semi-supervised or unsupervised learning, all the recent novel methods are summarized in this paper. The principle, advantages and disadvantages of each algorithm are also compared and analyzed.
{"title":"MetaSeg: A survey of meta-learning for image segmentation","authors":"Jiaxing Sun, Yujie Li","doi":"10.1016/j.cogr.2021.06.003","DOIUrl":"10.1016/j.cogr.2021.06.003","url":null,"abstract":"<div><p>Big data-driven deep learning methods have been widely used in image or video segmentation. However, in practical applications, training a deep learning model requires a large amount of labeled data, which is difficult to achieve. Meta-learning, as one of the most promising research areas in the field of artificial intelligence, is believed to be a key tool for approaching artificial general intelligence. Compared with the traditional deep learning algorithm, meta-learning can update the learning task quickly and complete the corresponding learning with less data. To the best of our knowledge, there exist few researches in the meta-learning-based visual segmentation. To this end, this paper summarizes the algorithms and current situation of image or video segmentation technologies based on meta-learning and point out the future trends of meta-learning. Meta-learning has the characteristics of segmentation that based on semi-supervised or unsupervised learning, all the recent novel methods are summarized in this paper. The principle, advantages and disadvantages of each algorithms are also compared and analyzed.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 83-91"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.06.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89860935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01. DOI: 10.1016/j.cogr.2021.02.001
Jincai Zhang, Mei Wang
A brain-computer interface (BCI) can provide a communication channel that conveys brain information to the outside world. In particular, BCIs based on motor imagery play an important role in brain-controlled robots, such as rehabilitation robots, wheelchair robots, nursing bed robots, unmanned aerial vehicles and so on. In this paper, the development of robots based on motor imagery BCIs is reviewed from three aspects: the electroencephalogram (EEG) evocation paradigms, the signal processing algorithms and the applications. First, the different types of brain-controlled robots are reviewed and classified from the perspective of the evocation paradigms. Second, the relevant algorithms for EEG signal processing are introduced, including feature extraction methods and classification algorithms. Third, the applications of motor imagery brain-controlled robots are summarized. Finally, the current challenges and future research directions of robots controlled by motor imagery BCIs are discussed.
{"title":"A survey on robots controlled by motor imagery brain-computer interfaces","authors":"Jincai Zhang, Mei Wang","doi":"10.1016/j.cogr.2021.02.001","DOIUrl":"10.1016/j.cogr.2021.02.001","url":null,"abstract":"<div><p>A brain-computer interface (BCI) can provide a communication approach conveying brain information to the outside. Especially, the BCIs based on motor imagery play the important role for the brain-controlled robots, such as the rehabilitation robots, the wheelchair robots, the nursing bed robots, the unmanned aerial vehicles and so on. In this paper, the developments of the robots based on motor imagery BCIs are reviewed from three aspects: the electroencephalogram (EEG) evocation paradigms, the signal processing algorithms and the applications. First, the different types of the brain-controlled robots are reviewed and classified from the perspective of the evocation paradigms. Second, the relevant algorithms for the EEG signal processing are introduced, which including feature extraction methods and the classification algorithms. Third, the applications of the motor imagery brain-controlled robots are summarized. Finally, the current challenges and the future research directions of the robots controlled by the motor imagery BCIs are discussed.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 12-24"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.02.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"94276831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Makeup transfer is one of the applications of image style transfer; it refers to transferring a reference makeup to a face without makeup while maintaining the original appearance of the plain face and the makeup style of the reference face. In order to understand the research status of makeup transfer, this paper systematically reviews makeup transfer technology. Following the development of makeup transfer methods, the paper first introduces and analyzes the traditional methods. In particular, the makeup transfer methods based on deep learning frameworks are summarized, covering both their advantages and disadvantages. Finally, some key points in the current challenges and the future development directions of makeup transfer technology are discussed.
{"title":"Deep learning method for makeup style transfer: A survey","authors":"Xiaohan Ma , Fengquan Zhang , Huan Wei , Liuqing Xu","doi":"10.1016/j.cogr.2021.09.001","DOIUrl":"10.1016/j.cogr.2021.09.001","url":null,"abstract":"<div><p>Makeup transfer is one of the applications of image style transfer, which refers to transfer the reference makeup to the face without makeup, and maintaining the original appearance of the plain face and the makeup style of the reference face. In order to understand the research status of makeup transfer, this paper systematically sorts out makeup transfer technology. According to the development process of the method of makeup transfer, our paper first introduces and analyzes the traditional methods of makeup transfer. In particular, the methods of makeup transfer based on deep learning framework are summarized, covering both disadvantages and advantages. Finally, some key points in the current challenges and future development direction of makeup transfer technology are discussed.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 182-187"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266724132100015X/pdfft?md5=c5178cad6941ffa98c8c774fb2ac3ca3&pid=1-s2.0-S266724132100015X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82114756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01. DOI: 10.1016/j.cogr.2021.06.001
Mohd Javaid, Abid Haleem, Ravi Pratap Singh, Rajiv Suman
New technologies are increasingly applied in manufacturing, services, and communications. Industry 4.0 is the fourth industrial revolution, which supports organisational efficiency. Robotics is an important technology of Industry 4.0 and provides extensive capabilities in the field of manufacturing. This technology has enhanced automation systems and performs repetitive jobs precisely and at a lower cost. Robotics is progressively leading to the manufacturing of quality products while maintaining the value of existing collaborative schemes. The primary outcome of Industry 4.0 is intelligent factories developed with the aid of advanced robotics, massive data, cloud computing, solid safety, intelligent sensors, the Internet of Things, and other advanced technological developments to be highly powerful, safe, and cost-effective. Thus, businesses will refine their manufacturing for mass adaptation by improving workplace safety, improving the reliability of actual work, and saving costs. This paper discusses the significant potential of robotics in the field of manufacturing and allied areas, and it presents eighteen major applications of robotics for Industry 4.0. Robots are well suited to collecting otherwise hard-to-obtain manufacturing data because they operate closer to the component than most other factory machines. This technology is helpful for performing complex and hazardous jobs, for automation, for withstanding high temperatures, and for working continuously and for long durations on assembly lines. Many robots operating in intelligent factories use artificial intelligence to perform high-level tasks, and they can now also make decisions and learn from experience in various ongoing situations.
{"title":"Substantial capabilities of robotics in enhancing industry 4.0 implementation","authors":"Mohd Javaid , Abid Haleem , Ravi Pratap Singh , Rajiv Suman","doi":"10.1016/j.cogr.2021.06.001","DOIUrl":"10.1016/j.cogr.2021.06.001","url":null,"abstract":"<div><p>There is the increased application of new technologies in manufacturing, service, and communications. Industry 4.0 is the new fourth industrial revolution, which supports organisational efficiency. Robotics is an important technology of Industry 4.0, which provides extensive capabilities in the field of manufacturing. This technology has enhanced automation systems and does repetitive jobs precisely and at a lower cost. Robotics is progressively leading to the manufacturing of quality products while maintaining the value of existing collaborators schemes. The primary outcome of Industry 4.0 is intelligent factories developed with the aid of advanced robotics, massive data, cloud computing, solid safety, intelligent sensors, the Internet of things, and other advanced technological developments to be highly powerful, safe, and cost-effective. Thus, businesses will refine their manufacturing for mass adaptation by improving the workplace's safety and reliability on actual work and saving costs. This paper discusses the significant potential of Robotics in the field of manufacturing and allied areas. The paper discusses eighteen major applications of Robotics for Industry 4.0. Robots are ideal for collecting mysterious manufacturing data as they operate closer to the component than most other factory machines. This technology is helpful to perform a complex hazardous job, automation, sustain high temperature, working entire time and for a long duration in assembly lines. Many robots operating in intelligent factories use artificial intelligence to perform high-level tasks. Now they can also decide and learn from experience in various ongoing situations.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 58-75"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.06.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"108045927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01. DOI: 10.1016/j.cogr.2021.04.001
Jiang Wang, Mei Wang
As a subjective psychological and physiological response to external stimuli, emotion is ubiquitous in our daily life. With the continuous development of artificial intelligence and brain science, emotion recognition from EEG signals is rapidly becoming a multidisciplinary research field. This paper surveys the relevant scientific literature of the past five years and reviews the emotional feature extraction methods and classification methods that use EEG signals. Commonly used feature extraction methods include time domain analysis, frequency domain analysis, and time-frequency domain analysis. The widely used classification methods include machine learning algorithms based on the Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naive Bayes (NB), etc., whose classification accuracy ranges from 57.50% to 95.70%. The classification accuracy of deep learning algorithms based on Neural Networks (NN), Long Short-Term Memory (LSTM), and Deep Belief Networks (DBN) ranges from 63.38% to 97.56%.
{"title":"Review of the emotional feature extraction and classification using EEG signals","authors":"Jiang Wang, Mei Wang","doi":"10.1016/j.cogr.2021.04.001","DOIUrl":"10.1016/j.cogr.2021.04.001","url":null,"abstract":"<div><p>As a subjectively psychological and physiological response to external stimuli, emotion is ubiquitous in our daily life. With the continuous development of the artificial intelligence and brain science, emotion recognition rapidly becomes a multiple discipline research field through EEG signals. This paper investigates the relevantly scientific literature in the past five years and reviews the emotional feature extraction methods and the classification methods using EEG signals. Commonly used feature extraction analysis methods include time domain analysis, frequency domain analysis, and time-frequency domain analysis. The widely used classification methods include machine learning algorithms based on Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naive Bayes (NB), etc., and their classification accuracy ranges from 57.50% to 95.70%. The classification accuracy of the deep learning algorithms based on Neural Network (NN), Long and Short-Term Memory (LSTM), and Deep Belief Network (DBN) ranges from 63.38% to 97.56%.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"1 ","pages":"Pages 29-40"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.cogr.2021.04.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"95534010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-05-12. DOI: 10.1609/icaps.v20i1.13421
M. Göbelbecker, Thomas Keller, Patrick Eyerich, Michael Brenner, B. Nebel
When using a planner-based agent architecture, many things can go wrong. First and foremost, an agent might fail to execute one of the planned actions for some reason. Even more annoying, however, is a situation where the agent is incompetent, i.e., unable to come up with a plan at all. This might be because there are principal reasons that prohibit a successful plan, or simply because the task description is incomplete or incorrect. In either case, an explanation for such a failure would be very helpful. We address this problem and provide a formalization of coming up with excuses for not being able to find a plan. Based on that, we present an algorithm that is able to find excuses and demonstrate that such excuses can be found in practical settings in reasonable time.
{"title":"Coming up With Good Excuses: What to do When no Plan Can be Found","authors":"M. Göbelbecker, Thomas Keller, Patrick Eyerich, Michael Brenner, B. Nebel","doi":"10.1609/icaps.v20i1.13421","DOIUrl":"https://doi.org/10.1609/icaps.v20i1.13421","url":null,"abstract":"\u0000 \u0000 When using a planner-based agent architecture, many things can go wrong. First and foremost, an agent might fail to execute one of the planned actions for some reasons. Even more annoying, however, is a situation where the agent is incompetent, i.e., unable to come up with a plan. This might be due to the fact that there are principal reasons that prohibit a successful plan or simply because the task's description is incomplete or incorrect. In either case, an explanation for such a failure would be very helpful. We will address this problem and provide a formalization of coming up with excuses for not being able to find a plan. Based on that, we will present an algorithm that is able to find excuses and demonstrate that such excuses can be found in practical settings in reasonable time.\u0000 \u0000","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"68 1","pages":"81-88"},"PeriodicalIF":0.0,"publicationDate":"2010-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78646645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}