Map construction is the first step toward localization, navigation, and path planning for mobile robots in unknown environments. In human-robot collaboration (HRC) scenarios in modern manufacturing, where human workers' capabilities are closely integrated with the efficiency and precision of robots sharing the same workspace, a map that integrates geometric and semantic information is the technical foundation for intelligent interactions between human workers and robots, such as motion planning, reasoning, and context-aware decision-making. Although various map construction methods have been proposed for mobile robots' perception of the working environment, achieving such intelligent interactions in human-robot collaborative manufacturing remains challenging because semantic information is poorly integrated into the constructed map. On the one hand, lacking the ability to differentiate dynamic objects, a mobile robot may wrongly use them as spatial references when computing the pose transformation between two successive frames, which degrades the accuracy of its localization and pose estimation. On the other hand, a map integrating both geometric and semantic information can hardly be constructed in real time, so it cannot effectively support real-time reasoning and decision-making during the human-robot collaboration process.
This study proposes a novel map construction approach consisting of a semantic information generation module, a geometric information generation module, and a semantic & geometric information fusion module, which together integrate semantic and geometric information into the constructed map. First, the semantic information generation module analyzes captured images of the dynamic working environment, eliminates the features of dynamic objects, and generates semantic information for the static objects. Meanwhile, the geometric information generation module generates accurate geometric information about the robot's motion plane from the environment data. Finally, the semantic & geometric fusion module constructs a map that integrates semantic and geometric information in real time. Experimental results demonstrate the effectiveness of the proposed semantic map construction approach.
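The three-module pipeline can be illustrated with a minimal sketch. All names here (`semantic_module`, `geometric_module`, `fuse`, the `DYNAMIC_CLASSES` set, and the toy grid representation) are hypothetical illustrations of the workflow, not the paper's actual implementation, which operates on camera images and real environment data.

```python
# Hypothetical sketch of the three-module pipeline: filter dynamic-object
# features, build geometric occupancy, then fuse semantics into the map.
# Labels and data layout are assumptions for illustration only.

DYNAMIC_CLASSES = {"person", "forklift"}  # assumed dynamic-object labels

def semantic_module(detections):
    """Keep only static-object features; dynamic objects are eliminated."""
    return [d for d in detections if d["label"] not in DYNAMIC_CLASSES]

def geometric_module(static_features):
    """Toy geometric step: mark the grid cell under each static feature."""
    return {(round(f["x"]), round(f["y"])): True for f in static_features}

def fuse(static_features, occupancy):
    """Attach semantic labels to occupied grid cells."""
    semantic_map = {}
    for f in static_features:
        cell = (round(f["x"]), round(f["y"]))
        if occupancy.get(cell):
            semantic_map[cell] = f["label"]
    return semantic_map

detections = [
    {"label": "person", "x": 1.2, "y": 0.4},     # dynamic: filtered out
    {"label": "workbench", "x": 3.6, "y": 2.1},  # static: kept
]
static = semantic_module(detections)
grid = geometric_module(static)
print(fuse(static, grid))  # {(4, 2): 'workbench'}
```

In this sketch, discarding dynamic detections before the geometric step mirrors the motivation above: dynamic objects never become spatial references, so only static structure enters the fused semantic map.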