In recent years, deep learning approaches have achieved impressive results in object recognition. However, such techniques are problematic in real-world robotics applications because of the burden of collecting and labeling training images. We present a framework by which a robot can be directed to acquire domain-relevant data with little human effort. The framework is situated in a lifelong learning paradigm in which the robot becomes more intelligent about how it collects and stores data over time. By iteratively training only on image views that increase classifier performance, our approach collects representative views of objects while reducing the data that must be stored long-term. We show that our approach to acquiring domain-relevant data leads to a significant improvement in classification performance on in-domain objects compared to using available pre-constructed datasets. Additionally, our iterative view-sampling method strikes a good balance between classifier performance and data storage constraints.
{"title":"Learning Object Classifiers with Limited Human Supervision on a Physical Robot","authors":"Christopher Eriksen, A. Nicolai, W. Smart","doi":"10.1109/IRC.2018.00060","DOIUrl":"https://doi.org/10.1109/IRC.2018.00060","url":null,"abstract":"In recent years, deep learning approaches have been leveraged to achieve impressive results in object recognition. However, such techniques are problematic in real world robotics applications because of the burden of collecting and labeling training images. We present a framework by which we can direct a robot to acquire domain-relevant data with little human effort. This framework is situated in a lifelong learning paradigm by which the robot can be more intelligent about how it collects and stores data over time. By iteratively training only on image views that increase classifier performance, our approach is able to collect representative views of objects with fewer data requirements for longterm storage of datasets. We show that our approach for acquiring domain-relevant data leads to a significant improvement in classification performance on in-domain objects compared to using available pre-constructed datasets. Additionally, our iterative view sampling method is able to find a good balance between classifier performance and data storage constraints.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116837188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-robot collaboration plays a strong role in industrial production processes. ISO/TS 15066 defines four different methods of collaboration between humans and robots. So far, no robotic system has incorporated all four collaboration methods at once. In particular, for speed and separation monitoring, no sensor system has been available that can easily be attached directly to an off-the-shelf industrial robot arm and that is capable of detecting obstacles at distances from a few millimeters up to five meters. This paper presents first results of using a 3D time-of-flight camera directly on an industrial robot arm for obstacle detection in human-robot collaboration. We attached a Visionary-T camera from SICK to the flange of a KUKA LBR iiwa 7 R800. Evaluating the images in Matlab, we found that the setup works very well for detecting obstacles at distances from 0.5 m up to 5 m.
{"title":"Towards Safe Speed and Separation Monitoring in Human-Robot Collaboration with 3D-Time-of-Flight Cameras","authors":"Urban B. Himmelsbach, T. Wendt, Matthias Lai","doi":"10.1109/IRC.2018.00042","DOIUrl":"https://doi.org/10.1109/IRC.2018.00042","url":null,"abstract":"Human-robot collaboration plays a strong role in industrial production processes. The ISO/TS 15066 defines four different methods of collaboration between humans and robots. So far, there was no robotic system available that incorporates all four collaboration methods at once. Especially for the speed and separation monitoring, there was no sensor system available that can easily be attached directly to an off-the-shelf industrial robot arm and that is capable of detecting obstacles in distances from a few millimeters up to five meters. This paper presented first results of using a 3D time-of-flight camera directly on an industrial robot arm for obstacle detection in human-robot collaboration. We attached a Visionary-T camera from SICK to the flange of a KUKA LBR iiwa 7 R800. With Matlab, we evaluated the pictures and found that it works very well for detecting obstacles in a distance range starting from 0.5 m and up to 5 m.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130903765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reinforcement learning (RL) is a machine learning technique that has been increasingly used in robotic systems. In reinforcement learning, instead of manually pre-programming which action to take at each step, we convey the goal to a software agent in terms of reward functions. The agent tries different actions in order to maximize a numerical value, i.e., the reward. A misspecified reward function can cause problems such as reward hacking, where the agent finds ways to maximize the reward without achieving the intended goal. As RL agents become more general and autonomous, designing reward functions that elicit the desired behaviour becomes more important and more cumbersome. In this paper, we present a technique to formally express reward functions in a structured way; this encourages proper reward function design and also enables its formal verification. We start by defining the reward function using state machines. In this way, we can statically check that the reward function satisfies certain properties, e.g., high-level requirements of the function to learn. We then automatically generate a runtime monitor, which runs in parallel with the learning agent and provides the rewards according to the definition of the state machine and the behaviour of the agent. We use the UPPAAL model checker to design the reward model and verify TCTL properties that capture high-level requirements of the reward function, and LARVA to monitor and enforce the reward model on the RL agent at runtime.
{"title":"MoVEMo: A Structured Approach for Engineering Reward Functions","authors":"Piergiuseppe Mallozzi, Raúl Pardo, Vincent Duplessis, Patrizio Pelliccione, G. Schneider","doi":"10.1109/IRC.2018.00053","DOIUrl":"https://doi.org/10.1109/IRC.2018.00053","url":null,"abstract":"Reinforcement learning (RL) is a machine learning technique that has been increasingly used in robotic systems. In reinforcement learning, instead of manually pre-program what action to take at each step, we convey the goal a software agent in terms of reward functions. The agent tries different actions in order to maximize a numerical value, i.e. the reward. A misspecified reward function can cause problems such as reward hacking, where the agent finds out ways that maximize the reward without achieving the intended goal. As RL agents become more general and autonomous, the design of reward functions that elicit the desired behaviour in the agent becomes more important and cumbersome. In this paper, we present a technique to formally express reward functions in a structured way; this stimulates a proper reward function design and as well enables the formal verification of it. We start by defining the reward function using state machines. In this way, we can statically check that the reward function satisfies certain properties, e.g., high-level requirements of the function to learn. Later we automatically generate a runtime monitor — which runs in parallel with the learning agent — that provides the rewards according to the definition of the state machine and based on the behaviour of the agent. We use the UPPAAL model checker to design the reward model and verify the TCTL properties that model high-level requirements of the reward function and LARVA to monitor and enforce the reward model to the RL agent at runtime.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127587617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guaranteeing safety in human-robot co-existence often requires a prediction of the volume that could be occupied by the human up to a future time, in order to avoid collisions. Such predictions should be simple and fast for real-time calculation and collision-checking, yet account even for unexpected movement. To validate such a prediction, we use a complex biomechanical model to search for extreme human movements. Since the model has a large input space and highly nonlinear dynamics, we use an exploration algorithm based on RRTs to efficiently find the extreme movements. We find that the simple prediction encloses all arm positions found by the exploration algorithm, except where the biomechanical model does not account for collision between body tissue.
{"title":"Reachset Conformance Testing of Human Arms with a Biomechanical Model","authors":"C. Stark, Aaron Pereira, M. Althoff","doi":"10.1109/IRC.2018.00045","DOIUrl":"https://doi.org/10.1109/IRC.2018.00045","url":null,"abstract":"Guaranteeing safety in human-robot co-existence often requires a prediction of the volume that could be occupied by the human up to a future time, in order to avoid collisions. Such predictions should be simple and fast for real-time calculation and collision-checking, but account even for unexpected movement. We use a complex biomechanical model to search for extreme human movement, to validate such a prediction. Since the model has a large input space and highly nonlinear dynamics, we use an exploration algorithm based on RRTs to efficiently find the extreme movements. We find that the simple prediction encloses all arm positions found by the exploration algorithm, except where the biomechanical model does not account for collision between body tissue.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133048490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a Learning From Demonstration (LFD) approach is used to design an autonomous meal-assistant agent. The feeding task is modeled as a mixture of Gaussian distributions. Using data collected via kinesthetic teaching, the parameters of the Gaussian Mixture Model (GMM) are learned using Gaussian Mixture Regression (GMR) and the Expectation-Maximization (EM) algorithm. Feeding trajectories for different environments are reproduced by solving a constrained optimization problem. We show that the robot's end-effector can avoid obstacles by adding a set of extra constraints to the optimization problem. Finally, the performance of the designed meal assistant is evaluated in two feeding scenarios: one with obstacles in the path between the bowl and the mouth, and one without.
{"title":"Implementation of Feeding Task via Learning from Demonstration","authors":"N. Ettehadi, A. Behal","doi":"10.1109/IRC.2018.00058","DOIUrl":"https://doi.org/10.1109/IRC.2018.00058","url":null,"abstract":"In this paper, a Learning From Demonstration (LFD) approach is used to design an autonomous meal-assistant agent. The feeding task is modeled as a mixture of Gaussian distributions. Using the data collected via kinesthetic teaching, the parameters of Gaussian Mixture Model (GMM) are learned using Gaussian Mixture Regression (GMR) and Expectation Maximization (EM) algorithm. Reproduction of feeding trajectories for different environments is obtained by solving a constrained optimization problem. In this method we show that obstacles can be avoided by robot's end-effector by adding a set of extra constraints to the optimization problem. Finally, the performance of the designed meal assistant is evaluated in two feeding scenario experiments: one considering obstacles in the path between the bowl and the mouth and the other without.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133374509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a human object identification method based on a simplified Fast Region-based Convolutional Network (Fast R-CNN). Human identification is a problem of considerable practical interest. We propose a state-of-the-art method that is tested on major pedestrian datasets. Human detection relies on body-part detectors for the head and shoulders, the torso, and the legs, with three, two, and four different appearances, respectively. These detectors are integrated to identify humans in different poses. Fast R-CNN is a well-known method for object recognition using deep CNNs. The hybrid body-part detector demonstrates its merits for detecting partially occluded humans by integrating the scores of the individual part detectors based on the occlusion map. The configuration with the highest merged score is used to evaluate the detection score of the human detector. Experiments on two public datasets (INRIA and Caltech) show the effectiveness of the proposed approach.
{"title":"Human Object Identification for Human-Robot Interaction by Using Fast R-CNN","authors":"Shih-Chung Hsu, Yu-Wen Wang, Chung-Lin Huang","doi":"10.1109/IRC.2018.00043","DOIUrl":"https://doi.org/10.1109/IRC.2018.00043","url":null,"abstract":"This paper proposes a human object identification by using a simplified fast region-based convolutional network (R-CNN). Human identification is a problem of considerable practical interest. Here, we propose the state-of-the art method which is tested for major pedestrian datasets. Human detection consists of the body part detectors which detect head and shoulder, torso, and pair of legs, with three, two and four different appearances respectively. These detectors are integrated as to identify the human object with different poses. Fast R-CNN is a well-known method for object recognition using deep CNN. Hybrid body part detector demonstrates the merits for partially occluded human detection by integrating the scores of the individual part detectors based on the occlusion map. The highest merging score is the best configuration to evaluate the detection score of the human detector. Experiments on two public datasets (INRIA and Caltech) show the effectiveness of the proposed approach.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115496343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We can expect robots to efficiently perform tasks using appliances in a similar way to humans. A common approach is to build models of the appliances so that the robot can operate them, but this process is time-consuming. In this paper, we propose a method to estimate the proper operation of appliances using an ontology and convolutional neural networks (CNNs). We use CNNs to detect the appliances and their operating parts, and then perform an ontology analysis of the operating parts (e.g., buttons) and the appliances to infer the proper operation. The method can also be applied to appliances it was not trained on, because the dataset generalizes well owing to the inclusion of multiple appliances and the separate training for appliances and operating parts. We experimentally verify the effectiveness of the proposed method with a service robot operating in multi-object environments.
{"title":"Estimating the Operation of Unknown Appliances for Service Robots Using CNN and Ontology","authors":"G. A. G. Ricardez, Yosuke Osaki, Ming Ding, J. Takamatsu, T. Ogasawara","doi":"10.1109/IRC.2018.00039","DOIUrl":"https://doi.org/10.1109/IRC.2018.00039","url":null,"abstract":"We can expect robots to efficiently perform tasks using appliances in a similar way that humans do. A common approach is to build appliances' models so that robot can operate them but this process is time-consuming. In this paper, we propose a method to estimate the proper operation of appliances using ontology and convolutional neural networks (CNN). We propose to use CNNs to detect the appliances and the operating parts, and then perform an ontology analysis of the operating parts (e.g., buttons) and the appliances to infer the proper operation. This method can be used for appliances which it was not trained for because the dataset has a high generalization due to the inclusion of multiple appliances and the separated training for appliances and operating parts. We experimentally verify the effectiveness of the proposed method with a service robot operating in multi-object environments.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121987251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we develop a system to analyze gait patterns by integrating insole-type FSR sensors and IMU sensors. Using this system, an experiment was conducted to analyze pedestrian walking patterns, and the reliability of the developed system was verified. The system extracts six quantities (roll, pitch, yaw, foot height, foot movement distance, and weight on the FSR sensors) from the sensors. These data can be used to calculate stride lengths and step lengths, which are important when analyzing pedestrian walking patterns. Experiments verifying the developed gait system determine its reliability based on the calculated data for a unit stride of 0.5 m. As a result, it was confirmed that the step length had an error range of ±7.17% and the stride length an error range of ±6.71%.
{"title":"An Integrated System for Gait Analysis Using FSRs and an IMU","authors":"Harin Kim, Yeon Kang, David R. Valencia, Donghan Kim","doi":"10.1109/IRC.2018.00073","DOIUrl":"https://doi.org/10.1109/IRC.2018.00073","url":null,"abstract":"In this paper, we developed a system to analyze gait patterns by integrating insole type FSR sensors and IMU sensors. Using this, an experiment was conducted to analyze the walking pattern of the pedestrian, and the reliability of the developed system was verified. The developed system extracts six data (roll, pith, yaw, foot height, foot movement distance, weight on FSR sensors) from each sensor. These data can be used to calculate stride lengths and step lengths which is important when analyzing pedestrian walking patterns. Experiments to verify the developed gait system determine the reliability based on the calculated data with the unit stride (0.5m). As a result, it was confirmed that the step length had an error range of ± 7.17% and the stride length had an error range of ± 6.71%.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124152634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past few years, modeling of continuum robots has been the subject of considerable attention in the research community. In this paper, we compare a set of forward kinematic models developed for continuum robots, with the underlying assumption of piecewise constant curvature. A new approximate kinematic model based on phase and actuator length differences is also introduced for comparison. The comparative evaluation consists of computer simulation and physical experiments on a multisection continuum robotic manipulator, the OctArm. The experiments include both elongation and bending in 3D space. The comparative accuracy of the models is reported, along with relative numerical stability. Further conclusions are drawn on the applicability of the models to different real-world scenarios.
{"title":"A Comparison of Constant Curvature Forward Kinematics for Multisection Continuum Manipulators","authors":"Anant Chawla, Chase G. Frazelle, I. Walker","doi":"10.1109/IRC.2018.00046","DOIUrl":"https://doi.org/10.1109/IRC.2018.00046","url":null,"abstract":"Over the past few years, modeling of continuum robots has been the subject of considerable attention in the research community. In this paper, we compare a set of forward kinematic models developed for continuum robots, with the underlying assumption of piecewise constant curvature. A new approximate kinematic model based on phase and actuator length differences is also introduced for comparison. The comparative evaluation consists of computer simulation and physical experiments on a multisection continuum robotic manipulator, the OctArm. The experiments include both elongation and bending in 3D space. The comparative accuracy of the models is reported, along with relative numerical stability. Further conclusions are drawn on the applicability of the models to different real-world scenarios.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128240261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, external electromagnetic actuation (EMA) systems have been introduced to control the locomotion of a capsule endoscope (CE) using magnetic force. An EMA system provides a manual user interface for controlling the capsule, but inspectors suffer from fatigue due to the long examination time. We propose an autonomous driving algorithm for the capsule endoscope. The algorithm searches, based on image processing, for the target point toward which the capsule should orient; the steering is manipulated automatically until the capsule is oriented at the target point, and propulsion is then applied until the capsule deviates from it. To verify the feasibility of the algorithm, simulated endoscopic images were acquired from a commercially available endoscopic capsule using an intestine phantom and a linear and rotational motion stage. The driving simulator was tested on arc-shaped paths of various curvatures under various propulsion forces. In most conditions, the proposed algorithm succeeded in driving autonomously along the given paths. Under some conditions with large curvature and large propulsion force, the target point was missed, but a scanning algorithm for the missed target point may overcome this problem. In conclusion, the proposed algorithm could be used in an active capsule endoscope system to provide an autonomous driving mode for capsule endoscopy without additional sensors or devices.
{"title":"Simulation Study of Autonomous Drive for Active Capsule Endoscopy","authors":"Hyeon Cho, Tae Jin Kim, Jae Hong Lee, H. Kim, Jong-Oh Park, Jong Hee Lee, Cheong Lee, Y. Son","doi":"10.1109/IRC.2018.00083","DOIUrl":"https://doi.org/10.1109/IRC.2018.00083","url":null,"abstract":"Recently, external electromagnetic actuation(EMA) system was introduced to control the locomotion of the capsule endoscopy(CE) using magnetic force. EMA system provides the manual user interface to control the system, but inspectors suffered from fatigues due to the long examination time. We proposed an autonomous driving algorithm for the capsule endoscope. The algorithm searched the target point based on the image processing where the capsule should orient, and the steering was automatically manipulated until the capsule oriented the target point. Then, the propulsion was made until the capsule deviated from the target point. In order to verify the feasibility of the algorithm, simulated endoscopic images were acquired from the commercially available endoscopic capsule by using intestine phantom and a linear and rotation motion stage. The driving simulator was tested on the arc-shaped paths having the various curvatures under the various propulsion forces. In the most conditions, the proposed algorithm succeeded in driving autonomously in the given paths. In some conditions, having a large curvature and a large propulsion, the target point was missed, but scanning algorithm for the missed target point may overcome this problem. In conclusion, the proposed algorithm could be utilized in the active capsule endoscope system and provide the autonomous driving mode in the capsule endoscopy without additional sensors or devices.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"118 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120904455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}