FloorLevel-Net: Recognizing Floor-Level Lines with Height-Attention-Guided Multi-task Learning

Mengyang Wu1
Wei Zeng2
Chi-Wing Fu1
1 The Chinese University of Hong Kong, Hong Kong, China
2 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China

Code + Dataset [GitHub]
Paper [Paper]

Left column shows two example street-view images in London (top) and Hong Kong (bottom), where the camera views are side- and front-facing relative to the building, respectively. Note the occlusions introduced by the advertisement billboard and light post circled in red on the bottom left. Middle column shows floor-level lines recognized by our method with geometric positions and semantic order labels. Right column shows potential floor-aware image-overlay results to aid shopping (top) and navigation (bottom).


The ability to recognize the position and order of the floor-level lines that divide adjacent building floors can benefit many applications, for example, urban augmented reality (AR). This work tackles the problem of locating floor-level lines in street-view images, using a supervised deep learning approach. Unfortunately, very little data is available for training such a network: current street-view datasets contain either semantic annotations that lack geometric attributes, or rectified facades without perspective priors. To address this issue, we first compile a new dataset and develop a new data augmentation scheme to synthesize training samples by harnessing (i) the rich semantics of existing rectified facades and (ii) perspective priors of buildings in diverse street views. Next, we design FloorLevel-Net, a multi-task learning network that associates explicit features of building facades and implicit floor-level lines, along with a height-attention mechanism to help enforce a vertical ordering of floor-level lines. The generated segmentations are then passed to a second-stage geometry post-processing to exploit self-constrained geometric priors for plausible and consistent reconstruction of floor-level lines. Quantitative and qualitative evaluations conducted on assorted facades in existing datasets and street views from Google demonstrate the effectiveness of our approach. Also, we present context-aware image overlay results and show the potential of our approach in enriching AR-related applications.
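The augmentation idea of combining rectified facades with perspective priors can be illustrated with a small sketch: warping the corners (and, by the same transform, any annotated floor-level lines) of a rectified facade through a homography that mimics a street-level viewpoint. The helper name and the homography values below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography (hypothetical helper)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # divide out the scale

# A rectified facade occupies the unit square; an illustrative perspective
# prior foreshortens it horizontally, as if seen obliquely from the street.
facade = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.3, 0.0, 1.0]])  # x-dependent depth term
warped = apply_homography(H, facade)
```

Applying the same transform to the facade's floor-level annotations yields perspective-consistent training labels for free, which is what makes synthesizing diverse street-view samples from flat facade datasets attractive.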


Our two-stage approach: (i) FloorLevel-Net is a multi-task learning network that segments the input image into building-facade-wise semantic regions (top) and floor-level distributions (bottom); and (ii) our method further fits and refines the pixel-wise network outputs into polylines with geometric parameters. Further, we can take the reconstructed floor-level lines to support and enrich urban AR applications with floor-aware image overlay.
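A minimal sketch of the second-stage idea, fitting the network's pixel-wise floor-level predictions into parameterized lines: below, each floor label's pixels are reduced to one least-squares line. This is an assumed simplification; the paper's actual post-processing additionally exploits self-constrained geometric priors (e.g., consistent ordering and convergence across a facade), and the `fit_floor_lines` name and label encoding are hypothetical.

```python
import numpy as np

def fit_floor_lines(label_map):
    """Fit one line y = m*x + b per floor label in a segmentation map.

    `label_map` is an H x W integer array: 0 = background, k > 0 = pixels
    the network assigned to the k-th floor-level line (1 = lowest floor).
    Returns {k: (slope, intercept)} in image coordinates.
    """
    lines = {}
    for k in np.unique(label_map):
        if k == 0:
            continue
        ys, xs = np.nonzero(label_map == k)   # pixel coordinates of label k
        m, b = np.polyfit(xs, ys, deg=1)      # least-squares line fit
        lines[int(k)] = (m, b)
    return lines

# Toy 6x8 segmentation with two horizontal floor-level lines.
seg = np.zeros((6, 8), dtype=int)
seg[1, :] = 2   # upper line (second floor, smaller image y)
seg[4, :] = 1   # lower line (first floor)
lines = fit_floor_lines(seg)
```

The semantic order labels predicted by FloorLevel-Net are what make this per-floor grouping possible: without them, nearby line pixels from different floors could not be separated before fitting.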

Qualitative comparison results

The results illustrate the effects of (i) our data augmentation scheme by comparing DeeplabV3+ models trained on CMP and on our training data (green background), (ii) multi-task learning with and without the height-attention mechanism (red background), and (iii) our full method further with geometry post-processing (yellow background), vs. the ground truths (GT).

AR overlaying results

The potential of our approach to support and enrich various AR scenarios, e.g., navigation, advertisement, etc.


The street-view images are from the Google Street View service. This work is supported partially by the Research Grants Council of the Hong Kong Special Administrative Region (Project no. CUHK 14206320) and the Guangdong Basic and Applied Basic Research Foundation (2021A1515011700).