“Olá, my name is Ana”: A study on Brazilians interacting with a receptionist robot
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251435
G. Trovato, J. J. G. Ramos, Helio Azevedo, A. Moroni, Silvia Magossi, H. Ishii, R. Simmons, A. Takanishi
In the near future, robots are expected to perform assistive tasks and many different kinds of jobs. In particular, humanoid robots are possible candidates for helpers in activities of daily living. One possible employment is working as a receptionist, providing useful directions to visitors in a public office. How such robots should look is a matter of growing importance: their appearance must avoid uncanny-valley effects on the human user and be appropriate for the different job areas they may serve in human society. In this paper we describe a study on the anthropomorphism of a receptionist robot. We invited Brazilian people with different education levels to interact with two variations of a receptionist robot that differed in physical appearance as well as in the sound of the voice: in one case, a conversational agent rendered with computer graphics; in the other, a real humanoid robot. Results gathered from surveys and from the participants' comments provide useful directions for designing a receptionist robot, as well as insights on the effect of the digital divide on anthropomorphism.
{"title":"“Olá, my name is Ana”: A study on Brazilians interacting with a receptionist robot","authors":"G. Trovato, J. J. G. Ramos, Helio Azevedo, A. Moroni, Silvia Magossi, H. Ishii, R. Simmons, A. Takanishi","doi":"10.1109/ICAR.2015.7251435","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251435","url":null,"abstract":"In the near future, robots are expected to perform assistive tasks and do different kind of jobs. In particular, humanoid robots are possible candidates for being used as helpers in activities of daily living. One possible employment is working as a receptionist, providing useful indications to visitors in a public office. The design of how robots could look like is a matter of growing importance, as it is important to create a look that poses no uncanny valley effects on the human user and that is appropriate to potentially serve in different job areas in human society. In this paper we want to describe the study on anthropomorphism of a receptionist robot. We invited Brazilian people with different education levels to interact with two variations of a receptionist robot, different in physical appearance as well as in the sound of the voice. In one case, the appearance was a conversational agent made with computer graphics, in the other, a real humanoid robot. Results gathered from surveys and comments of the participants provide useful directions to design a receptionist robot and insights on the effect of digital divide on anthropomorphism.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134058453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Path following with an underactuated self-balancing spherical-wheel mobile robot
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251455
Ali Nail Inal, Ö. Morgül, U. Saranlı
Reactive methods for motion planning offer robustness advantages in the presence of large disturbances for robotic systems. Unfortunately, it is difficult to generalize these reactive methods to underactuated systems, and existing research predominantly focuses on solutions based on high-gain tracking of dynamically feasible trajectories. Self-balancing mobile robots with spherical wheels, such as BallBot platforms, are rich examples of such underactuated mechanisms, where motion planning has traditionally been done through trajectory tracking on the actuated system states with little explicit feedback on the external states that are not directly controlled. In this paper, we propose a reactive path-following controller for the external states of such platforms, eliminating the need to follow time-parameterized state trajectories. We first define the path-following problem in general and present how it can be realized through a parallel composition of existing two-dimensional controllers for the BallBot morphology. We then show in simulation how this idea can be used to achieve asymptotic convergence to a linear path with a constant forward velocity. We also show how the basic idea can be generalized to more complex path shapes such as circles, towards eventual deployment in a more complete motion planning framework based on sequential composition.
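For intuition, the reactive idea can be reduced to a point-mass sketch: instead of tracking a time-parameterized trajectory, the controller regulates the cross-track error to the path while holding a constant forward speed. This is only a kinematic illustration under assumed gains and integration step, not the authors' composition of two-dimensional BallBot controllers.

```python
import numpy as np

def line_following_velocity(pos, p0, d, v_forward=0.5, k_cross=1.0):
    """Reference planar velocity for following the line through p0 with unit
    direction d: constant speed along the path, proportional correction of the
    cross-track error, and no time-indexed trajectory anywhere."""
    d = d / np.linalg.norm(d)
    e = pos - p0
    cross = e - np.dot(e, d) * d        # error component orthogonal to the path
    return v_forward * d - k_cross * cross

# Point-mass rollout: the robot converges to the x-axis while moving forward.
pos = np.array([0.0, 1.0])
for _ in range(200):
    v = line_following_velocity(pos, p0=np.zeros(2), d=np.array([1.0, 0.0]))
    pos = pos + 0.05 * v                # simple Euler integration, dt = 0.05 s
print(pos)                              # the y-component has decayed toward 0
```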
{"title":"Path following with an underactuated self-balancing spherical-wheel mobile robot","authors":"Ali Nail Inal, Ö. Morgül, U. Saranlı","doi":"10.1109/ICAR.2015.7251455","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251455","url":null,"abstract":"Reactive methods for motion planning offer robustness advantages in the presence of large disturbances for robotic systems. Unfortunately, it is difficult to generalize these reactive methods to underactuated systems and existing research predominantly focuses on solutions based on the high-gain tracking of dynamically feasible trajectories. Self-balancing mobile robots with spherical wheels, BallBot platforms, are rich examples of such underactuated mechanisms where motion planning has traditionally been done through trajectory tracking on actuated system states with little explicit feedback on external states that are not directly controlled. In this paper, we propose a reactive path-following controller for external states of such platforms, eliminating the need to follow time-parameterized state trajectories. We first define the path-following problem in general, and present how it can be realized through a parallel composition of existing, two-dimensional controllers for the BallBot morphology. We then show in simulation how this idea can be used to achieve asymptotic convergence to a linear path with a constant forward velocity. We also show how the basic idea can be generalized to more complex path shapes such as circles, towards an eventual deployment in a more complete motion planning framework based on sequential composition.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115235353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving ambiguities in MDS relative localization
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251461
Carmelo Di Franco, Alessandra Melani, Mauro Marinoni
Monitoring teams of mobile nodes is becoming crucial in a growing number of activities. When it is not possible to use fixed references or external measurements, one possible solution is to derive relative positions from local communication. Well-known techniques such as trilateration and multilateration exist to locate a single node, but such methods are not designed to locate entire teams. The technique of Multidimensional Scaling (MDS), in contrast, allows us to find the relative coordinates of an entire team starting from the knowledge of the inter-node distances. However, like every relative-localization technique, it suffers from geometrical ambiguities, namely rotation, translation, and flip. In this work, we address these ambiguities by exploiting the node velocities to correlate the relative maps at two consecutive instants. In particular, we introduce a new version of MDS, called enhanced Multidimensional Scaling (eMDS), which is able to handle these types of ambiguities. The effectiveness of our localization technique is validated by a set of simulation experiments, and our results are compared against existing approaches.
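For reference, classical MDS recovers relative coordinates from the inter-node distance matrix via double centering and an eigendecomposition; the plain-NumPy sketch below (not the authors' eMDS) also makes the ambiguity explicit, since the recovered map is defined only up to translation, rotation and flip.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Relative coordinates from a matrix of inter-node distances D (classical
    MDS). The result is unique only up to translation, rotation and flip --
    exactly the ambiguities that eMDS resolves using node velocities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Four nodes on a unit square: the recovered map has the correct shape,
# but its position, orientation and handedness are arbitrary.
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(classical_mds(D))
```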
{"title":"Solving ambiguities in MDS relative localization","authors":"Carmelo Di Franco, Alessandra Melani, Mauro Marinoni","doi":"10.1109/ICAR.2015.7251461","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251461","url":null,"abstract":"Monitoring teams of mobile nodes is becoming crucial in a growing number of activities. Where it is not possible to use fixed references or external measurements, one of the possible solutions involves deriving relative positions from local communication. Well-known techniques such as trilateration and multilateration exist to locate a single node although such methods are not designed to locate entire teams. The technique of Multidimensional Scaling (MDS), however, allow us to find the relative coordinates of entire teams starting from the knowledge of the inter-node distances. However, like every relative-localization technique, it suffers from geometrical ambiguities including rotation, translation, and flip. In this work, we address such ambiguities by exploiting the node velocities to correlate the relative maps at two consecutive instants. In particular, we introduce a new version of MDS, called enhanced Multidimensional Scaling (eMDS), which is able to handle these types of ambiguities. The effectiveness of our localization technique is then validated by a set of simulation experiments and our results are compared against existing approaches.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125577444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical segmentation of manipulation actions based on object relations and motion characteristics
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251510
Mirko Wächter, T. Asfour
Understanding human actions is an indispensable capability of humanoid robots that acquire task knowledge from human demonstration. Segmenting such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach which considers the semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates the motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment: 6D pose trajectories of the human's hands and of all objects are extracted precisely and robustly from data of a marker-based tracking system. We evaluated and compared our approach with a manual reference segmentation and with well-known segmentation algorithms based on PCA and zero-velocity crossings, using 13 human demonstrations of daily activities. We show that significantly smaller segmentation errors are achieved with our approach while providing the necessary granularity for representing human demonstrations.
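A toy sketch of the two-level idea, with hand-picked thresholds standing in for the paper's heuristic: boundaries are first placed wherever the set of contact relations changes, and each semantic segment is then sub-divided at low-velocity local minima of the hand motion.

```python
import numpy as np

def segment_demo(contacts, velocity, v_thresh=0.05):
    """Two-level segmentation sketch (illustrative threshold, not the paper's
    heuristic): cut where the contact relations change, then sub-divide each
    semantic segment at frames where the hand speed has a local minimum
    below v_thresh."""
    # Level 1: semantic boundaries at every change of the contact relations.
    sem = [0] + [t for t in range(1, len(contacts)) if contacts[t] != contacts[t - 1]]
    sem.append(len(contacts))

    segments = []
    for a, b in zip(sem[:-1], sem[1:]):
        # Level 2: motion-based cuts inside the semantic segment [a, b).
        cuts = [t for t in range(a + 1, b - 1)
                if velocity[t] < v_thresh
                and velocity[t] <= velocity[t - 1] and velocity[t] <= velocity[t + 1]]
        bounds = [a] + cuts + [b]
        segments += list(zip(bounds[:-1], bounds[1:]))
    return segments

contacts = ["none"] * 5 + ["hand-cup"] * 10 + ["cup-table"] * 5   # toy contact labels
velocity = list(np.abs(np.sin(np.linspace(0, 6, 20))))            # toy hand speed profile
print(segment_demo(contacts, velocity))
```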
{"title":"Hierarchical segmentation of manipulation actions based on object relations and motion characteristics","authors":"Mirko Wächter, T. Asfour","doi":"10.1109/ICAR.2015.7251510","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251510","url":null,"abstract":"Understanding human actions is an indispensable capability of humanoid robots which acquire task knowledge from human demonstration. Segmentation of such continuous demonstrations into meaningful segments reduces the complexity of understanding an observed task. In this paper, we propose a two-level hierarchical action segmentation approach which considers semantics of an action in addition to human motion characteristics. On the first level, a semantic segmentation is performed based on contact relations between human end-effectors, the scene, and between objects in the scene. On the second level, the semantic segments are further sub-divided based on a novel heuristic that incorporates the motion characteristics into the segmentation procedure. As input for the segmentation, we present an observation method for tracking the human as well as the objects and the environment. 6D pose trajectories of the human's hands and all objects are extracted in a precise and robust manner from data of a marker-based tracking system. We evaluated and compared our approach with a manual reference segmentation and well-known segmentation algorithms based on PCA and zero-velocity-crossings using 13 human demonstrations of daily activities.We show that significantly smaller segmentation errors are achieved with our approach while providing the necessary granularity for representing human demonstrations.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126381463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust task execution through experience-based guidance for cognitive robots
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251527
Sanem Sariel, Petek Yildiz, Sertac Karapinar, Dogan Altan, Melis Kapotoglu
Robustness in task execution requires tight integration of continual planning, monitoring, reasoning and learning processes. In this paper, we investigate how robustness can be ensured by learning from experience. Our approach is based on a learning-guided planning process for a robot that gains its experience from action execution failures through lifelong experiential learning. Inductive Logic Programming (ILP) is used as the learning method to frame hypotheses for failure situations. It provides a first-order logic representation of the robot's experience. The robot uses this experience to construct heuristics to guide its future decisions. The performance of the learning-guided planning process is analyzed on our Pioneer 3-AT robot. The results reveal that the hypotheses framed for failure cases are sound and ensure safety and robustness in the robot's future tasks.
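To illustrate how such learned guidance can enter action selection, the sketch below applies a hand-written rule in place of an ILP-induced hypothesis and uses it to filter risky action candidates; the predicates, the failure log and the rule itself are hypothetical, not taken from the paper.

```python
# The failure log only indicates the kind of experience an ILP learner would
# generalize from; the rule below is a hand-written stand-in for an induced
# hypothesis such as fails(pick, X) :- fragile(X), gripper_force(high).
failure_log = [
    {"action": "pick", "object": "glass", "fragile": True,  "gripper_force": "high", "failed": True},
    {"action": "pick", "object": "box",   "fragile": False, "gripper_force": "high", "failed": False},
    {"action": "pick", "object": "cup",   "fragile": True,  "gripper_force": "low",  "failed": False},
]

def learned_hypothesis(state):
    """Stand-in for the induced first-order clause."""
    return state["action"] == "pick" and state["fragile"] and state["gripper_force"] == "high"

def heuristic_filter(candidates):
    """Use the hypothesis as a heuristic: drop candidates predicted to fail."""
    return [c for c in candidates if not learned_hypothesis(c)]

candidates = [
    {"action": "pick", "object": "glass", "fragile": True, "gripper_force": "high"},
    {"action": "pick", "object": "glass", "fragile": True, "gripper_force": "low"},
]
print(heuristic_filter(candidates))   # only the low-force grasp survives
```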
{"title":"Robust task execution through experience-based guidance for cognitive robots","authors":"Sanem Sariel, Petek Yildiz, Sertac Karapinar, Dogan Altan, Melis Kapotoglu","doi":"10.1109/ICAR.2015.7251527","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251527","url":null,"abstract":"Robustness in task execution requires tight integration of continual planning, monitoring, reasoning and learning processes. In this paper, we investigate how robustness can be ensured by learning from experience. Our approach is based on a learning guided planning process for a robot that gains its experience from action execution failures through lifelong experiential learning. Inductive Logic Programming (ILP) is used as the learning method to frame hypotheses for failure situations. It provides first-order logic representation of the robot's experience. The robot uses this experience to construct heuristics to guide its future decisions. The performance of the learning guided planning process is analyzed on our Pioneer 3-AT robot. The results reveal that the hypotheses framed for failure cases are sound and ensure safety and robustness in future tasks of the robot.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126525782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting the long-term robustness of visual features
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251497
Benjamin Metka, Annika Besetzny, Ute Bauer-Wersing, M. Franzius
Many vision-based localization methods extract local visual features to build a sparse map of the environment and estimate the position of the camera from feature correspondences. However, the majority of features are typically detectable only over short time frames, so that most information in the map becomes obsolete over longer periods of time. Long-term localization is therefore a challenging problem, especially in outdoor scenarios where the appearance of the environment can change drastically due to different times of day, weather conditions or seasonal effects. We propose to learn a model of stable and unstable feature characteristics from texture and color information around detected interest points, which allows the robustness of visual features to be predicted. The model can be incorporated into the conventional feature extraction and matching process to reject potentially unstable features during the mapping phase. This additional filtering step yields more compact maps and therefore reduces the probability of false-positive matches, which can cause complete failure of a localization system. The model is trained with recordings of a train journey on the same track across seasons, which facilitates the identification of stable and unstable features. Experiments on data of the same domain demonstrate the generalization capabilities of the learned characteristics.
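A minimal sketch of the prediction step, assuming simple colour/texture statistics as the descriptor and a logistic-regression classifier (the paper does not prescribe this particular model); random arrays stand in for the labelled cross-season recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_descriptor(patch):
    """Colour/texture statistics around an interest point: per-channel mean and
    standard deviation plus the mean gradient magnitude of the grey image."""
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    return np.concatenate([patch.mean(axis=(0, 1)),
                           patch.std(axis=(0, 1)),
                           [np.hypot(gx, gy).mean()]])

# Training: patches around features that were (label 1) or were not (label 0)
# re-detected across seasons; random data stands in for the real recordings.
rng = np.random.default_rng(0)
patches = rng.random((200, 16, 16, 3))
labels = rng.integers(0, 2, 200)
X = np.stack([patch_descriptor(p) for p in patches])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Mapping phase: keep only interest points predicted to be stable.
keep = clf.predict_proba(X[:10])[:, 1] > 0.6
print(keep)
```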
{"title":"Predicting the long-term robustness of visual features","authors":"Benjamin Metka, Annika Besetzny, Ute Bauer-Wersing, M. Franzius","doi":"10.1109/ICAR.2015.7251497","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251497","url":null,"abstract":"Many vision based localization methods extract local visual features to build a sparse map of the environment and estimate the position of the camera from feature correspondences. However, the majority of features is typically only detectable for short time-frames so that most information in the map becomes obsolete over longer periods of time. Long-term localization is therefore a challenging problem especially in outdoor scenarios where the appearance of the environment can change drastically due to different day times, weather conditions or seasonal effects. We propose to learn a model of stable and unstable feature characteristics from texture and color information around detected interest points that allows to predict the robustness of visual features. The model can be incorporated into the conventional feature extraction and matching process to reject potentially unstable features during the mapping phase. The application of the additional filtering step yields more compact maps and therefore reduces the probability of false positive matches, which can cause complete failure of a localization system. The model is trained with recordings of a train journey on the same track across seasons which facilitates the identification of stable and unstable features. Experiments on data of the same domain demonstrate the generalization capabilities of the learned characteristics.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"219 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124336129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust control of a parallel robot
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251491
M. Rachedi, M. Bouri, B. Hemici
This work presents the implementation of two model-based control schemes for parallel mechanisms: H∞ robust control designed by the mixed-sensitivity approach, and Computed Torque Control (CTC). Simulation results are illustrated for a 3-DOF parallel robot, namely the Delta robot. The performance of the CTC and H∞ controllers is compared on a semi-elliptic pick-and-place trajectory with a parabolic position profile. The motion dynamics are increased up to an acceleration of 12 g and a velocity of 1 m/s. For robustness tests, parametric disturbances are introduced by adding loads on the travelling plate of the Delta robot. The simulation results obtained on a SimMechanics model of the Delta robot show that the H∞ controller achieves better performance and robustness than the CTC controller. It is implemented in the control scheme without incorporating the inverse dynamic model (IDM) of the system, which significantly reduces the computation time of the control law.
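For reference, the computed-torque law uses the inverse dynamic model to cancel the nonlinear rigid-body dynamics and impose linear error dynamics through PD gains. The sketch below applies it to a toy 1-DOF pendulum rather than the Delta robot; the model and gains are assumptions made for illustration.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, G, Kp, Kd):
    """Generic CTC law: tau = M(q)(qdd_des + Kd*(qd_des - qd) + Kp*(q_des - q))
    + C(q, qd) qd + G(q), i.e. feedback linearisation via the inverse
    dynamic model."""
    e, edot = q_des - q, qd_des - qd
    return M(q) @ (qdd_des + Kd @ edot + Kp @ e) + C(q, qd) @ qd + G(q)

# Toy 1-DOF pendulum standing in for the manipulator dynamics (m = l = 1).
M = lambda q: np.array([[1.0]])
C = lambda q, qd: np.array([[0.0]])
G = lambda q: np.array([9.81 * np.sin(q[0])])
Kp, Kd = np.array([[100.0]]), np.array([[20.0]])   # critically damped error dynamics

q, qd, dt = np.array([0.0]), np.array([0.0]), 0.001
for _ in range(2000):                               # regulate to q_des = 1 rad
    tau = computed_torque(q, qd, np.array([1.0]), np.zeros(1), np.zeros(1), M, C, G, Kp, Kd)
    qdd = np.linalg.solve(M(q), tau - C(q, qd) @ qd - G(q))
    qd, q = qd + qdd * dt, q + qd * dt
print(q)                                            # close to 1.0
```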
{"title":"Robust control of a parallel robot","authors":"M. Rachedi, M. Bouri, B. Hemici","doi":"10.1109/ICAR.2015.7251491","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251491","url":null,"abstract":"This work presents the implementation of two model based control schemes in parallel mechanisms; the H∞ robust control designed by the mixed sensitivity approach and the Computed Torque Control (CTC). Simulations results are illustrated for the 3DOF parallel robot namely the “Delta robot”. Performances of the CTC and the H∞ controllers are compared for a semi elliptic pick and place trajectory and with a parabolic position profile. The movement dynamic is increased up to the acceleration of 12 G and a velocity of 1m/s. For robustness tests, parametric disturbances are introduced by additional loads on the travelling plate of the Delta robot. The simulation results performed on a SimMechanics model of the Delta robot show that the H∞ controller presents better performances and robustness compared to the CTC controller. It is implemented in the control scheme without the incorporation of the inverse dynamic model (IDM) of the system which reduces significantly the computation time of the control law.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123448455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A probabilistic, variable-resolution and effective quadtree representation for mapping of large environments
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251518
Yingfeng Chen, Wei Shuai, Xiaoping Chen
In this paper, a probabilistic quadtree map is presented as an alternative to the traditional grid map, which is widely used in robot mapping and localization yet suffers from prohibitive storage consumption. A quadtree is a well-known data structure capable of achieving a compact and efficient representation of large two-dimensional environments. We extend this basic idea by integrating it with a probabilistic framework and propose a clamping scheme to update the map occupancy probability values, which eliminates the uncertainty of the system and facilitates data compression. Meanwhile, in order to speed up the operation of locating quadtree nodes, a coding rule between a node coordinate and its corresponding access key is adopted. We also discuss a new implementation of Rao-Blackwellized particle filter simultaneous localization and mapping (SLAM) based on the quadtree representation. Experiments conducted in areas of different sizes (including a shopping mall of 23,700 m²) demonstrate that the SLAM algorithm based on the quadtree representation performs excellently compared to the grid map, especially in large-scale environments.
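The clamping idea can be summarised with an OctoMap-style log-odds update: occupancy increments saturate at fixed bounds, so fully observed sibling leaves become identical and can be pruned into their parent, which is what enables the compression. The constants below are illustrative, not necessarily the authors' values.

```python
import numpy as np

L_HIT, L_MISS = 0.85, -0.4      # log-odds increments for a hit / a miss
L_MIN, L_MAX = -2.0, 3.5        # clamping bounds

def update_logodds(l, hit):
    """Clamped Bayesian occupancy update in log-odds form."""
    return float(np.clip(l + (L_HIT if hit else L_MISS), L_MIN, L_MAX))

def occupancy_probability(l):
    return 1.0 - 1.0 / (1.0 + np.exp(l))

# Repeated hits saturate at L_MAX instead of growing without bound, so sibling
# quadtree leaves that all reach the bound store the same value and can be
# merged -- the variable-resolution compression mentioned in the abstract.
l = 0.0
for _ in range(10):
    l = update_logodds(l, hit=True)
print(l, occupancy_probability(l))   # 3.5, ~0.97
```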
{"title":"A probabilistic, variable-resolution and effective quadtree representation for mapping of large environments","authors":"Yingfeng Chen, Wei Shuai, Xiaoping Chen","doi":"10.1109/ICAR.2015.7251518","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251518","url":null,"abstract":"In this paper, a probabilistic quadtree map is presented instead of traditional grids map which is used widely in robot mapping and localization field yet is confronted with prohibitive storage consumption. A quadtree is a well-known data structure capable of achieving compact and efficient representation of large two-dimensional environments. We extend this basic idea by integrating with probabilistic framework and propose a clamping scheme to update the map occupancy probability value, which eliminates the uncertainty of the system and facilitates data compression. Meanwhile, in order to speed the operation of locating quadtree nodes, a coding rule between a node coordinate and its corresponding access key is adopted. We also discuss a new implementation of the Rao-Blackwellized particle filter simultaneous localization and mapping (SLAM) based on quadtree representation. Experiments are conducted in different sizes of areas (even in a shopping mall of 23,700 m2) demonstrate that the SLAM algorithm based on quadtree representation works excellently compared to grids map especially in large scale environments.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123173952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-line collision detection of n-robot industrial manipulators using advanced collision map
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251490
Ahmad Yasser Afaghani, Y. Aiyama
This work addresses collision detection among n industrial robot manipulators that are controlled using point-to-point commands in on-line mode. The robots share the same workspace and have no prior knowledge of the commands that will be sent after system start-up. We previously proposed an advanced collision map concept for detecting collisions between two robot manipulators only. In this work, the advanced collision map method is generalized to detect collisions between the whole bodies of the n robots and to represent them as collision areas on a single 2D map. To realize collision-free trajectories of the robots, time scheduling of command execution is applied to avoid the collision areas on the map. The method is tested on an OpenGL-based simulator to demonstrate its effectiveness.
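A much-simplified view of the scheduling step: once the collision areas have been projected onto a robot's own time axis, the start of its next point-to-point command is postponed until the execution window clears them. The interval values are hypothetical, and the sketch ignores how the collision map itself is built.

```python
def schedule_start(duration, collision_intervals, t_earliest=0.0):
    """Delay the start of a point-to-point command (of the given duration)
    until its execution window [t, t + duration] no longer overlaps any
    collision interval predicted on the collision map."""
    t = t_earliest
    changed = True
    while changed:
        changed = False
        for a, b in sorted(collision_intervals):
            if t < b and t + duration > a:   # window overlaps [a, b]
                t = b                        # push the start past this area
                changed = True
    return t

# Hypothetical collision areas on this robot's time axis: starting now with a
# 2.0 s command would cross [1.0, 2.5] and then [4.0, 4.5], so the command is
# postponed to t = 4.5.
print(schedule_start(2.0, [(1.0, 2.5), (4.0, 4.5)]))
```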
{"title":"On-line collision detection of n-robot industrial manipulators using advanced collision map","authors":"Ahmad Yasser Afaghani, Y. Aiyama","doi":"10.1109/ICAR.2015.7251490","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251490","url":null,"abstract":"This work addresses collision detection between n-robot industrial manipulators which are controlled using point-to-point commands in on-line mode. The robots are sharing the same workspace and have no prior knowledge of the commands which will be sent after start up the system. We have proposed an advanced collision map concept for detecting the collisions between only two robot manipulators. In this work, the advanced collision map method is generalized to detect the collisions between the whole body of n-robot and representing them as collision areas on single 2D-space map. To realize a collision-free trajectory of the robots, time scheduling of command execution time is successful to be applied to avoid collision areas on the map. The method is tested on an openGL-based simulator for demonstrating the effectiveness of the method.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130412827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and implementation of a cognitive-ergonomic navigation interface on an optimized holonomic mechanism
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251466
Mohsen Falahi, B. Beigzadeh, Z. Bank, R. Rajabli
Since holonomic wheeled mechanisms are used in narrow and crowded areas, designing a cognitively ergonomic interface for navigating these mechanisms is a challenging issue in robotics. In this paper, a novel holonomic mechanism with four omnidirectional wheels is first presented, with the aim of increasing the maneuverability of holonomic wheeled robots by decreasing the volume of the mechanism. This reduction is achieved by modifying the conventional arrangement of the actuators using the SCAMPER method. A novel cognitive-ergonomic interface is then presented for navigating such mechanisms. According to the collected statistics, this interface is intuitive and user-friendly; with this navigation interface, users can easily interact with a holonomic mechanism without any previous training.
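For context, the inverse kinematics of a conventional four-omni-wheel base (wheels mounted tangentially on a circle around the centre) is sketched below; the paper's optimized actuator arrangement differs, so this is only a baseline illustration with assumed geometry.

```python
import numpy as np

def wheel_speeds(vx, vy, omega, R=0.2, wheel_radius=0.05,
                 wheel_angles=(0.0, np.pi / 2, np.pi, 3 * np.pi / 2)):
    """Angular speed of each omnidirectional wheel for a desired body twist
    (vx, vy, omega), assuming the conventional tangential mounting -- not the
    modified arrangement proposed in the paper."""
    speeds = []
    for th in wheel_angles:
        drive = -vx * np.sin(th) + vy * np.cos(th) + R * omega
        speeds.append(drive / wheel_radius)
    return np.array(speeds)

# Pure sideways translation: the wheels on the x-axis contribute almost nothing
# (their passive rollers slide), while the other pair spins in opposite directions.
print(wheel_speeds(vx=1.0, vy=0.0, omega=0.0))
```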
{"title":"Design and implementation of a cognitive-ergonomic navigation interface on an optimized holonomic mechanism","authors":"Mohsen Falahi, B. Beigzadeh, Z. Bank, R. Rajabli","doi":"10.1109/ICAR.2015.7251466","DOIUrl":"https://doi.org/10.1109/ICAR.2015.7251466","url":null,"abstract":"Since holonomic wheeled mechanisms are used in narrow and crowded areas, designing a cognitive ergonomic interface for navigating these mechanisms is a challenging issue in robotics. In this paper, a novel holonomic mechanism with four omnidirectional wheels is firstly presented with the aim of increasing the maneuverability of holonomic wheeled robots by decreasing the volume of the mechanism. This reduction is performed by modifying the conventional arrangement of actuators by using SCAMPER method. Thus, a novel cognitive-ergonomic interface is presented for navigating such mechanisms. According to the statistic data this interface is mind compatible and user friendly. On the other hand with this navigation interface, users can interact with a holonomic mechanism easily without any previous training.","PeriodicalId":432004,"journal":{"name":"2015 International Conference on Advanced Robotics (ICAR)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131091837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}