MapAI: Precision in Building Segmentation

MapAI: Precision in Building Segmentation is a competition arranged with the Norwegian Artificial Intelligence Research Consortium (NORA) in collaboration with the Centre for Artificial Intelligence Research at the University of Agder (CAIR), the Norwegian Mapping Authority, AI:Hub, Norkart, and the Danish Agency for Data Supply and Infrastructure. We propose two building segmentation tasks: the first task may use only aerial images, while the second must use laser data (LiDAR), with or without aerial images.

Registration for this competition is closed

Award

The prizes will be 1200 euros for first place, 500 euros for second place, and 300 euros for third place.

The winning team will be announced during the Northern Lights Deep Learning conference, held on 10-12 January 2023 at UiT The Arctic University of Norway.

We encourage you to register for the Northern Lights Deep Learning conference if you are based in the Nordic region. 

The competition prizes are sponsored by AI:Hub and the Norwegian Mapping Authority.

Important Dates:

  • Development dataset release: 21st of September, 2022

  • Participants' submission of results: 5th of December, 2022

  • Evaluation results for participants: 12th of December, 2022

  • Methods description paper submission: 22nd of December, 2022

Registration

Register for the competition through the GitHub repository. More details can be found in the README.md.

Motivation

Buildings are an essential source of information for population studies, policy-making, and city management. Computer vision technologies such as classification, object detection, and segmentation have proved helpful in several scenarios, such as urban planning and disaster recovery. Segmentation is the most precise of these methods and gives detailed insight into the data, as it highlights the exact area of interest.

Acquiring accurate segmentation masks of buildings is challenging because the training data derives from real-world photographs. As a result, the data often has varying quality, a large class imbalance, and noise in different forms. The segmentation masks are affected by optical issues such as shadows, reflections, and perspective, and trees, power lines, or even other buildings may obstruct visibility. Furthermore, small buildings have proved more difficult to segment than larger ones: they are harder to detect, more prone to obstruction, and often confused with other classes. Lastly, buildings are found in diverse areas, ranging from rural to urban locations, which requires the model to generalize across these varied environments.

The participants will be invited to submit to the following two tasks:

Task 1: Aerial Image Segmentation Task

The aerial image segmentation task aims to solve building segmentation using only aerial images. Segmentation from aerial images alone is helpful in several scenarios, including disaster recovery from remote sensing imagery where laser data is unavailable. We ask the participants to develop machine learning models that generate accurate segmentation masks of buildings using only aerial images.

Task 2: Laser Data Segmentation Task

The laser data segmentation task aims to solve building segmentation using laser data. Segmentation using laser data is helpful in urban planning and change detection scenarios, where precision is essential. We ask the participants to develop machine learning models that generate accurate segmentation masks of buildings using laser data, with or without aerial images.

Both tasks are mandatory to compete for the prize money. Submissions to only one task are allowed but are not eligible for any prizes.

Submission guidelines

Data

For the competition, we provide the participants with a dataset containing aerial images, laser data, and ground truths for the buildings. We split the dataset into a training dataset and a test dataset. The training dataset is released at the start of the competition, while the test dataset will be kept hidden until the competition is over. When the competition is complete, we will release the full dataset.

The training dataset covers several different locations in Denmark. This area variability ensures a diverse dataset with many different environments and building types. The test dataset covers seven different locations in Norway, ranging from large cities to more rural areas.

The data is derived from real-world sources. As a result, there are cases where buildings in the aerial image do not correspond to a ground truth mask. In addition, the ground truths are generated using a digital terrain model (DTM), which skews the tops of buildings in the images compared to the ground truths.

The dataset can be accessed via Hugging Face.
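For reference, a minimal loading sketch using the Hugging Face datasets library is shown below. The dataset identifier is a placeholder for the ID on the Hugging Face dataset page, and the remark about sample contents reflects the dataset description above rather than verified column names; consult the competition README for the actual details.

from datasets import load_dataset

# Placeholder identifier -- replace with the MapAI dataset ID from the
# Hugging Face dataset page referenced above.
dataset = load_dataset("<mapai-dataset-id>", split="train")

sample = dataset[0]
# Each sample is expected to contain an aerial image, the corresponding
# laser (LiDAR) data, and a ground truth building mask; the exact column
# names are defined by the dataset itself.
print(sample.keys())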

Evaluation Methodology

Both tasks will be evaluated using Intersection-over-Union (IoU) and Boundary Intersection-over-Union (BIoU). The total score is calculated as an average across both tasks. Each task is scored using the equation below.

\(S_{\text{Task}} = \frac{\mathrm{IoU} + \mathrm{BIoU}}{2}\)

The final score is then calculated using the equation below:

\(S = \frac{S_{\text{Task 1}} + S_{\text{Task 2}}}{2}\)
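As an illustration, the sketch below computes the per-task score for a single pair of binary masks with NumPy and SciPy. It assumes the common Boundary IoU formulation, in which boundary regions are obtained by eroding each mask; the function names and the dilation_ratio parameter are illustrative, and this is not the official evaluation code.

import numpy as np
from scipy.ndimage import binary_erosion

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    # Standard Intersection-over-Union for two binary masks.
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty
    return float(np.logical_and(pred, target).sum() / union)

def boundary(mask: np.ndarray, dilation_ratio: float = 0.02) -> np.ndarray:
    # Boundary region: pixels of the mask removed by a small erosion.
    # dilation_ratio = 0.02 follows a common Boundary IoU default; the
    # competition's exact boundary width may differ.
    h, w = mask.shape
    d = max(1, int(round(dilation_ratio * np.sqrt(h ** 2 + w ** 2))))
    eroded = binary_erosion(mask, iterations=d)
    return np.logical_and(mask, np.logical_not(eroded))

def biou(pred: np.ndarray, target: np.ndarray) -> float:
    # Boundary IoU: IoU restricted to the boundary regions of both masks.
    return iou(boundary(pred), boundary(target))

def task_score(pred: np.ndarray, target: np.ndarray) -> float:
    # S_Task = (IoU + BIoU) / 2 for one prediction / ground truth pair.
    return (iou(pred, target) + biou(pred, target)) / 2

# Final score across the two tasks: S = (S_Task1 + S_Task2) / 2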

Competition Proceedings

All participants are asked to submit a two-page paper (double column, plus one additional page for references) describing their method and results. The submitted papers will be reviewed single-blind and will be published. Outstanding submissions will be invited to submit a full-length paper to a special issue about the competition in the Nordic Machine Intelligence Journal.

List of Task Organizers

  • Sander Jyhne, the Norwegian Mapping Authority and CAIR
  • Per-Arne Andersen, CAIR
  • Morten Goodwin, CAIR
  • Karianne Ormseth, AI:Hub
  • Ivar Oveland, the Norwegian Mapping Authority
  • Alexander Salveson Nossum, Norkart
  • Mathilde Ørstavik, Norkart
  • Andrew C. Flatman, The Danish Agency for Data Supply and Infrastructure

For more information about the competition, please email Sander Jyhne at sander.jyhne@kartverket.no or raise an issue on the GitHub repository.
