Macro-photogrammetry using Structure from Motion (SfM) is widely used in museums and biorepositories to create high-resolution 3D models for educational outreach and proxy specimen access. These models can enable a wide range of analyses, including serving as a latent source of training data for species inventory approaches that leverage machine learning for automated interpretation of 2D imagery. To assess the potential of these models to generate accurate 2D representations, this study investigates how the geometric and texture accuracy of the resulting 3D models is affected by key photogrammetric parameters, specifically horizontal image overlap, vertical image overlap, and focus stacking, with an emphasis on the impacts on rendered 2D images. Focus stacking has traditionally been considered essential for improving overall image sharpness in 3D specimen models; however, it represents a major bottleneck in the 3D digitization workflow, as it requires multiple images per view angle and considerable time and resources to successfully create a 3D model. Ten ground beetle specimens of varying shapes, sizes, and colors, including beetles with iridescent carapaces, were modeled using SfM, and both geometric and texture (external coloration and patterning) accuracy were quantitatively assessed. Geometric accuracy was evaluated using the mean absolute error (MAE) between modeled and actual specimen measurements across five morphometric features (elytra length, elytra width, the lengths of the second and third antenna segments, and the lengths of the first and second tibiae). Texture accuracy of the 2D renders was measured using percent RGB similarity and the structural similarity index measure (SSIM), comparing original images to rendered images. We find that focus stacking, which is commonly assumed to enhance accuracy, provides little to no benefit for machine learning purposes when renders are compared to the original images.
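The three accuracy metrics can be sketched in NumPy. This is an illustrative implementation, not the authors' code: `rgb_similarity` uses one plausible definition of percent RGB similarity (mean per-pixel, per-channel agreement), and `ssim_global` is a simplified single-window SSIM rather than the windowed variant used by common image libraries.

```python
import numpy as np

def mae(measured, modeled):
    """Mean absolute error between actual and modeled morphometric measurements."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return float(np.mean(np.abs(measured - modeled)))

def rgb_similarity(original, render):
    """Percent RGB similarity: mean per-pixel, per-channel agreement (0-100).
    Assumes 8-bit images of identical shape; this definition is illustrative."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(render, dtype=float)
    return float(100.0 * (1.0 - np.mean(np.abs(a - b)) / 255.0))

def ssim_global(x, y, data_range=255.0):
    """Global (single-window) SSIM over a grayscale image pair; a simplified
    form of the windowed SSIM computed by common image libraries."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images score 100% RGB similarity and an SSIM of 1.0, so both metrics quantify how far a render drifts from its source photograph.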
Results from this study indicate that these methods are not only unnecessary but also reduce image fidelity when renders are compared to their original 2D images. Image capture parameters such as horizontal and vertical overlap were found to have a significant impact on model geometric accuracy and rendered 2D image quality. The overall best-performing method, in terms of geometric accuracy, texture accuracy, and model creation consistency, used non-focus-stacked images captured at five vertical angles, with a 20° rotation between each vertical imaging line and 11.25° of horizontal rotation between scans. These results challenge the necessity of focus stacking as a blanket best practice for all use cases in contemporary 3D modeling workflows, indicating that forgoing focus stacking is a potentially beneficial and resource-efficient way to produce 3D specimen models intended for accurate 2D renderings, such as those used as training data for deep-learning algorithms.
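The best-performing capture geometry can be enumerated as a simple angle grid. This sketch assumes full 360° horizontal coverage at 11.25° increments and five elevation lines spaced 20° apart (the starting elevation is illustrative); under those assumptions, each specimen requires 160 views, one image each rather than a focus stack per view.

```python
def capture_positions(h_step_deg=11.25, v_step_deg=20.0,
                      n_vertical=5, v_start_deg=0.0):
    """Enumerate (elevation, azimuth) camera angles for a turntable scan.
    Assumes full 360° horizontal coverage; angle origins are illustrative."""
    n_horizontal = round(360.0 / h_step_deg)  # 32 stops per ring at 11.25°
    return [
        (v_start_deg + i * v_step_deg, j * h_step_deg)
        for i in range(n_vertical)
        for j in range(n_horizontal)
    ]

positions = capture_positions()
print(len(positions))  # 160 single (non-focus-stacked) images per specimen
```

At, say, five focus slices per view, the stacked workflow would instead capture 800 frames plus a stacking pass, which is the bottleneck the results suggest can be skipped.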
