Dense Labeling of Remote Sensing Images in the Wild

Automatic geographic mapping using Remote Sensing Images (RSIs) as a data source is usually modeled as a supervised classification problem. In this context, dense pixel labeling, also called semantic segmentation or pixel-wise classification, is a computer vision task that has made great strides in recent years, mainly due to the emergence of new approaches based on deep convolutional networks. Remote sensing applications have also benefited from these advances, and several studies have reported high-quality automated geographic mapping obtained with semantic segmentation techniques. An important issue, however, is that the reported advances are generally evaluated in relatively well-controlled environments. A number of challenges emerge when these approaches are employed in more specific applications: class imbalance, underrepresentation of some classes, and the presence of pixels of unknown classes during the prediction phase. In the case of geographic mapping from remote sensing images, there are also problems of geographic and temporal domain shift. In addition, sample annotation depends on expert users, which restricts the volume of annotated data available. In this project, we will address the challenges that limit the effective use of supervised learning for dense pixel labeling by studying and developing new approaches that increase the robustness of the models to these restrictions. The effectiveness and suitability of the proposed methods will be evaluated in two main applications: detection of rural roads in the Amazon rainforest and the Cerrado savanna; and monitoring of urban housing conditions and their relationship with dengue outbreaks.
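
To make the task and two of the named challenges concrete, the sketch below shows a minimal dense pixel labeling setup: a tiny fully convolutional network producing per-pixel class logits, trained with a class-weighted cross-entropy loss (one common mitigation for class imbalance) and an ignore index for unlabeled pixels. This is an illustrative assumption-laden example, not the project's actual method: PyTorch, the TinySegmenter model, the class count, and the weight values are all placeholders.

```python
# Illustrative sketch: a minimal fully convolutional segmenter with a
# class-weighted loss to mitigate class imbalance in dense pixel labeling.
# Model size, class count, and weight values are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 5          # hypothetical number of land-cover classes
IGNORE_INDEX = 255       # label reserved for unlabeled pixels

class TinySegmenter(nn.Module):
    """Small encoder-decoder that predicts one class score per pixel."""
    def __init__(self, in_channels=3, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),   # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Inverse-frequency style class weights (placeholder values): rare classes
# such as rural roads receive larger weights so the loss is not dominated
# by the abundant background class.
class_weights = torch.tensor([0.2, 1.0, 2.5, 3.0, 4.0])
criterion = nn.CrossEntropyLoss(weight=class_weights, ignore_index=IGNORE_INDEX)

model = TinySegmenter()
images = torch.randn(2, 3, 128, 128)                   # fake RGB patches
labels = torch.randint(0, NUM_CLASSES, (2, 128, 128))  # fake per-pixel labels

logits = model(images)            # (batch, classes, H, W)
loss = criterion(logits, labels)
loss.backward()
print(float(loss))
```

Note that the ignore index only excludes unlabeled pixels from the training loss; handling pixels of genuinely unknown classes at prediction time, as well as geographic and temporal domain shift, requires the additional mechanisms this project proposes to investigate.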