Download the base paper of the orthographic views project.


We provide projects in advanced deep learning. In this paper, we tackle a long-standing problem in computer-aided design (CAD): 3D object reconstruction from three orthographic views. In today’s product design and manufacturing industry, 2D engineering drawings are commonly used by designers to realize, update, and share their ideas, especially during the initial design stages. However, to enable further analysis (e.g., finite element analysis) and manufacturing, these 2D designs must be manually recreated as 3D models in CAD software. A method that automatically converts 2D drawings into 3D models would therefore greatly streamline the design process and improve overall efficiency. As the most popular way to describe an object in 2D drawings, an orthographic view is the projection of the object onto a plane perpendicular to one of the three principal axes. Over the past few decades, 3D reconstruction from three orthographic views has been extensively studied, with significant improvements in the types of applicable objects and in computational efficiency.
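The projection described above has a very simple form for axis-aligned geometry: projecting onto the plane perpendicular to a principal axis amounts to dropping that coordinate. A minimal sketch, assuming a unit cube as the object (the cube and the `project` helper are illustrative, not from the paper):

```python
# Minimal sketch: an orthographic view projects the object onto a plane
# perpendicular to one principal axis, i.e., it drops that coordinate.
# The cube and `project` helper here are illustrative assumptions.

def project(vertices, axis):
    """Project 3D vertices onto the plane perpendicular to `axis` (0=x, 1=y, 2=z)."""
    return [tuple(c for i, c in enumerate(v) if i != axis) for v in vertices]

# The eight corners of a unit cube.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Three views: drop z, y, or x. Duplicates collapse because vertices
# hidden behind one another project onto the same 2D point.
front = sorted(set(project(cube, 2)))
top   = sorted(set(project(cube, 1)))
side  = sorted(set(project(cube, 0)))
print(front)  # the four corners of the unit square
```

Note that the projection discards information (here, eight 3D corners collapse to four 2D points per view), which is why recovering the 3D model from the three views is a non-trivial inverse problem.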

We develop a new method to automatically convert 2D line drawings from three orthographic views into 3D CAD models. Existing methods reconstruct 3D models by back-projecting the 2D observations into 3D space while maintaining explicit correspondences between the input and output. Such methods are sensitive to errors and noise in the input and thus often fail in practice, where drawings created by human designers are imperfect. To overcome this difficulty, we leverage the attention mechanism in a Transformer-based sequence generation model to learn flexible mappings between the input and output. Further, we design shape programs that are well suited to generating the objects of interest, boosting reconstruction accuracy and facilitating CAD modeling applications. Experiments on a new benchmark dataset show that our method significantly outperforms existing ones when the inputs are noisy or incomplete.
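To make the "shape program" idea concrete, here is a toy sketch in the spirit of the description above. The DSL (a flat sequence of `box` statements) and the `decode_program` helper are hypothetical illustrations; the paper's actual program grammar may differ. The sequence-generation model would emit such a token sequence, which is then decoded into explicit geometry:

```python
# Hedged sketch: a toy "shape program" decoded into geometry.
# The DSL and decoder below are hypothetical, for illustration only.
# Each statement places an axis-aligned box given its minimum corner
# (x, y, z) and its extents (w, h, d).

def decode_program(tokens):
    """Decode [('box', x, y, z, w, h, d), ...] into (min_corner, max_corner) pairs."""
    boxes = []
    for op, x, y, z, w, h, d in tokens:
        if op != "box":
            raise ValueError(f"unknown op: {op}")
        boxes.append(((x, y, z), (x + w, y + h, z + d)))
    return boxes

program = [("box", 0, 0, 0, 4, 1, 2),  # a base plank
           ("box", 0, 1, 0, 1, 3, 2)]  # an upright plank resting on it
boxes = decode_program(program)
print(boxes)
```

Generating a structured program rather than raw geometry constrains the output to valid, editable CAD-like models, which is one reason such representations can tolerate noisy or incomplete input views.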

