Digitization of existing buildings with arbitrary shaped spaces from point clouds

Viktor Drobnyi*, Shuyan Li*, Ioannis K. Brilakis

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Digital twins for buildings can significantly reduce building operation costs. However, existing methods for constructing geometric digital twins fail to model the complex geometry of indoor environments. To address this problem, this paper proposes a novel method for digitizing building geometry with arbitrarily shaped spaces by detecting empty regions in point clouds and then expanding them to occupy the entire indoor space. The detected spaces are then used to detect structural objects and transitions between spaces, such as doors, without assuming their geometric properties. The method reconstructs the volumetric representation of individual spaces, detects the walls, windows, and doors between them, and segments large-scale, cluttered point cloud data (PCD) of complex environments into point clusters of individual spaces. We conduct extensive experiments on the Stanford 3D Indoor Spaces (S3DIS) and TUMCMS data sets and show that the proposed method outperforms existing methods for digitizing Manhattan-world buildings. In contrast to existing approaches, the method can digitize buildings with arbitrarily shaped spaces, including complex layouts; nonflat, nonvertical walls; and nonflat, nonhorizontal floors and ceilings.
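The core step described in the abstract, detecting empty regions in a point cloud and growing them until they fill each indoor space, can be pictured with a short sketch. Everything below is an illustrative assumption rather than the paper's implementation: the function names, the uniform voxel size, and the 6-connected flood fill are placeholders for whatever the method actually uses.

```python
# Minimal sketch (not the paper's implementation): voxelize a point
# cloud, then flood-fill 6-connected empty voxels into candidate spaces.
from collections import deque

import numpy as np


def voxelize(points: np.ndarray, voxel: float) -> np.ndarray:
    """Return a boolean occupancy grid; True marks voxels containing points."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel).astype(int)
    occ = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    occ[tuple(idx.T)] = True
    return occ


def grow_empty_regions(occ: np.ndarray) -> np.ndarray:
    """Label connected empty regions (candidate spaces) via BFS flood fill."""
    labels = np.zeros(occ.shape, dtype=int)
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    current = 0
    for seed in map(tuple, np.argwhere(~occ)):
        if labels[seed]:
            continue  # voxel already assigned to a region
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in neighbors:
                n = (x + dx, y + dy, z + dz)
                if (all(0 <= n[i] < occ.shape[i] for i in range(3))
                        and not occ[n] and not labels[n]):
                    labels[n] = current
                    queue.append(n)
    return labels


# Usage on synthetic points standing in for a real PCD.
pts = np.random.rand(5000, 3) * 10.0
spaces = grow_empty_regions(voxelize(pts, voxel=0.5))
print(spaces.max(), "candidate empty regions")
```

A deque-based BFS keeps memory bounded and avoids the recursion limit a recursive flood fill would hit on large grids. Per the abstract, the detected spaces then drive the later steps of finding the walls, windows, and doors between them without assuming their geometric properties.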
Original language: English
Article number: 04024027
Number of pages: 12
Journal: Journal of Computing in Civil Engineering
Volume: 38
Issue number: 5
Early online date: 12 Jul 2024
Publication status: Published - 01 Sept 2024
Externally published: Yes

Publications and Copyright Policy

This work is licensed under Queen’s Research Publications and Copyright Policy.
