This paper proposes a deep learning-based intelligent modeling framework for generating 3D architectural models from hand-drawn sketches, addressing the domain gap in 2D-to-3D transformation. By integrating architectural domain knowledge, specifically the phased, selective, and cyclic characteristics of the design process, the framework supports a structured, iterative generative approach. It comprises a 2D design phase, in which image retrieval, Stable Diffusion, and CycleGAN support conceptual exploration, multi-scheme generation, and depth map extraction, and a 3D design phase, in which Pixel2Mesh generates 3D forms that are then refined through Grasshopper-based parametric optimization. Empirical evaluation shows that the framework preserves structural fidelity while allowing for generative variation; structural similarity and geometric accuracy metrics validate its performance, confirming its ability to balance AI-driven massing generation with architectural precision. A Mars habitat case study, conducted in an academic research setting, serves as a controlled experiment to assess adaptability. While the study demonstrates the framework's potential for AI-assisted architectural generation, it also highlights the need for broader validation across diverse architectural typologies. By integrating computer vision and generative models into architectural workflows, this research bridges traditional and AI-driven design methodologies and contributes a cross-disciplinary approach that enhances the efficiency, quality, and innovation of design processes.
