{"title":"一种用于视频透视增强现实系统中摄像机配准的全局校正框架","authors":"Wenhao Yang, Yunbo Zhang","doi":"10.1115/1.4063350","DOIUrl":null,"url":null,"abstract":"\n Augmented Reality (AR) enhances the user's perception of the real environment by superimposing virtual images generated by computers. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in various manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment to ensure that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. During some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace. This misalignment can potentially impact the accuracy of the robot's operations during the task. Based on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce the misalignment accuracy in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the Head-Mounted Display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.","PeriodicalId":54856,"journal":{"name":"Journal of Computing and Information Science in Engineering","volume":" ","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Global Correction Framework for Camera Registration in Video See-Through Augmented Reality Systems\",\"authors\":\"Wenhao Yang, Yunbo Zhang\",\"doi\":\"10.1115/1.4063350\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Augmented Reality (AR) enhances the user's perception of the real environment by superimposing virtual images generated by computers. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in various manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment to ensure that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. During some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace. This misalignment can potentially impact the accuracy of the robot's operations during the task. 
Based on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce the misalignment accuracy in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the Head-Mounted Display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.\",\"PeriodicalId\":54856,\"journal\":{\"name\":\"Journal of Computing and Information Science in Engineering\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2023-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computing and Information Science in Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1115/1.4063350\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computing and Information Science in Engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1115/1.4063350","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
A Global Correction Framework for Camera Registration in Video See-Through Augmented Reality Systems
Augmented Reality (AR) enhances the user's perception of the real environment by superimposing computer-generated virtual images. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In several robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace. This misalignment can potentially impact the accuracy of the robot's operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the Head-Mounted Display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
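As a point of reference for the pipeline described in the abstract, the sketch below writes out one common form of the per-point registration chain and the HMD-to-camera offset in a video see-through AR system. The notation (the rigid transforms T, the intrinsic matrix K, and the frame names) is assumed for illustration and is not taken from the paper.

\[
\mathbf{p}_{\mathrm{screen}} \;\sim\; K \, T^{\mathrm{cam}}_{\mathrm{HMD}} \, T^{\mathrm{HMD}}_{\mathrm{world}} \, \mathbf{p}_{\mathrm{world}},
\]

where \(\mathbf{p}_{\mathrm{world}}\) is a virtual point in homogeneous world coordinates, \(T^{\mathrm{HMD}}_{\mathrm{world}}\) is the tracked HMD pose, \(T^{\mathrm{cam}}_{\mathrm{HMD}}\) is the fixed HMD-to-camera offset matrix recovered by the offline calibration, and \(K\) is the camera intrinsic (projection) matrix. If the camera's pose with respect to the world, \(T^{\mathrm{cam}}_{\mathrm{world}}\), is estimated independently during calibration (e.g., from a fiducial target), the offset follows as

\[
T^{\mathrm{cam}}_{\mathrm{HMD}} \;=\; T^{\mathrm{cam}}_{\mathrm{world}} \, \bigl( T^{\mathrm{HMD}}_{\mathrm{world}} \bigr)^{-1}.
\]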
Journal Introduction:
The ASME Journal of Computing and Information Science in Engineering (JCISE) publishes articles related to Algorithms, Computational Methods, Computing Infrastructure, Computer-Interpretable Representations, Human-Computer Interfaces, Information Science, and/or System Architectures that aim to improve some aspect of the product and system lifecycle (e.g., design, manufacturing, operation, maintenance, disposal, or recycling). Applications considered in JCISE manuscripts should be relevant to the mechanical engineering discipline. Papers can focus on fundamental research leading to new methods, or on the adaptation of existing methods to new applications.
Scope: Advanced Computing Infrastructure; Artificial Intelligence; Big Data and Analytics; Collaborative Design; Computer Aided Design; Computer Aided Engineering; Computer Aided Manufacturing; Computational Foundations for Additive Manufacturing; Computational Foundations for Engineering Optimization; Computational Geometry; Computational Metrology; Computational Synthesis; Conceptual Design; Cybermanufacturing; Cyber Physical Security for Factories; Cyber Physical System Design and Operation; Data-Driven Engineering Applications; Engineering Informatics; Geometric Reasoning; GPU Computing for Design and Manufacturing; Human Computer Interfaces/Interactions; Industrial Internet of Things; Knowledge Engineering; Information Management; Inverse Methods for Engineering Applications; Machine Learning for Engineering Applications; Manufacturing Planning; Manufacturing Automation; Model-based Systems Engineering; Multiphysics Modeling and Simulation; Multiscale Modeling and Simulation; Multidisciplinary Optimization; Physics-Based Simulations; Process Modeling for Engineering Applications; Qualification, Verification and Validation of Computational Models; Symbolic Computing for Engineering Applications; Tolerance Modeling; Topology and Shape Optimization; Virtual and Augmented Reality Environments; Virtual Prototyping