Traversability Mapping in Post-Flood Environment

The increased frequency of floods due to global warming poses formidable challenges for rescue operations worldwide. Standard mapping tools such as Google Maps and OpenStreetMap become unreliable in post-flood scenarios because known structures and road networks are destroyed. While response teams often use satellite and drone imagery to aid flood relief efforts, such imagery typically cannot detect underwater obstacles, making navigation unsafe for rescue boats. Furthermore, the complexity of post-flood environments, with highly variable depth, random unstructured obstacles, and extremely turbid water laden with sand particles, calls for robust environment perception and mapping systems tailored to these conditions. A further challenge in developing such a system is the unavailability of comprehensive datasets of flooded environments, a limitation that has constrained previous research. The primary objective of this thesis is to provide a robust surface and underwater perception system that exploits rich multi-modal sensory information and provides traversability information for safe navigation. The perception system builds a comprehensive understanding of obstacles' characteristics and categorizes their threat levels by integrating surface and underwater sensory data. Additionally, this research devises a versatile system capable of seamless reconfiguration across various surface water vehicles; this adaptability benefits rescue teams equipped with boats of diverse kinematics and motion models. Consequently, this thesis proposes the novel Shallow Water traversabIlity Mapping (SWiM) architecture, which integrates multiple sensory modalities to create a lightweight 2.5-dimensional traversability map covering both the surface and the underwater domain.
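To illustrate the idea of a 2.5-dimensional traversability cell that fuses surface and underwater observations, the sketch below stores one value per modality in each grid cell and derives a threat level from them. All names, thresholds, and the `blocked`/`submerged_hazard`/`caution` categories are illustrative assumptions, not SWiM's actual classification.

```python
# Hypothetical 2.5D traversability cell: one surface observation
# (camera/LiDAR) and one underwater observation (sonar depth) per cell,
# from which a threat level is derived. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Cell:
    surface_obstacle: bool   # camera/LiDAR detected an object above water
    water_depth_m: float     # sonar-estimated depth (large value = deep/clear)

def threat_level(cell: Cell, boat_draft_m: float = 0.5) -> str:
    """Classify a cell's threat for a boat with the given draft."""
    if cell.surface_obstacle:
        return "blocked"            # visible obstacle on the surface
    if cell.water_depth_m < boat_draft_m:
        return "submerged_hazard"   # too shallow or a hidden obstacle
    if cell.water_depth_m < 2 * boat_draft_m:
        return "caution"            # marginal keel clearance
    return "traversable"

# A tiny map fragment: shallow cell, surface obstacle, open water.
grid = {(4, 7): Cell(False, 0.3),
        (4, 8): Cell(True, 5.0),
        (4, 9): Cell(False, 5.0)}
levels = {ij: threat_level(c) for ij, c in grid.items()}
```

The key design point of such a 2.5D representation is that each cell keeps a small fixed set of scalar attributes rather than a full 3D volume, which keeps the map lightweight enough for on-board use.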
Enhancements in underwater obstacle detection from low signal-to-noise-ratio (SNR) sonar imagery are achieved through dedicated image-enhancement and depth-estimation modules. Concurrently, accurate distinction of objects from water is achieved by applying several deep-learning-based object detection and segmentation techniques to camera images. Fusing camera and LiDAR data through inverse-perspective mapping increases the certainty of obstacle detection. By combining the obstacle maps from both modalities, the system computes essential obstacle features, including their threat level and whether they are floating or sinking. System validation draws on a diverse data bank comprising state-of-the-art datasets and the novel multi-modal MASTER dataset, captured on three distinct boats in various water bodies. Additionally, a comprehensive post-flood simulation is presented that uses generative adversarial networks (GANs) to replicate realistic sensory noise models, enabling rigorous testing in complex scenarios.
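Inverse-perspective mapping, used above to bring camera detections into the same metric frame as LiDAR returns, can be sketched as intersecting a pixel's viewing ray with an assumed flat water plane. The pinhole model below and all intrinsics, the camera mounting height, and the function name are illustrative assumptions, not the thesis's calibration.

```python
# Minimal inverse-perspective-mapping sketch: back-project an image pixel
# onto a flat water plane below the camera, yielding metric coordinates
# that can be compared against LiDAR returns. Values are illustrative.
def pixel_to_water_plane(u, v, fx, fy, cx, cy, cam_height_m):
    """Intersect the pixel's viewing ray with the water plane.

    Camera frame: x right, y down, z forward; the water plane lies at
    y = cam_height_m. Returns (x, z) in metres on the plane, or None
    if the pixel is at or above the horizon (ray never hits the plane).
    """
    dx = (u - cx) / fx          # ray direction from the pinhole model
    dy = (v - cy) / fy
    if dy <= 0:                 # at/above the horizon: no intersection
        return None
    t = cam_height_m / dy       # scale the ray until it reaches the plane
    return (t * dx, t * 1.0)    # lateral offset and forward range

# A pixel 100 px below the principal point, with a 500 px focal length
# and the camera mounted 2 m above the water, maps 10 m straight ahead:
pt = pixel_to_water_plane(320, 420, 500, 500, 320, 320, 2.0)  # (0.0, 10.0)
```

Note that the flat-plane assumption is exactly what makes IPM attractive on open water: the "ground" really is close to planar, so a single homography-like back-projection suffices where general 3D reconstruction would not.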


Metadata
Author: Hannan Ejaz Keen
URN: urn:nbn:de:hbz:386-kluedo-83477
DOI: https://doi.org/10.26204/KLUEDO/8347
Advisor: Karsten Berns
Document Type: Doctoral Thesis
Cumulative document: No
Language of publication: English
Date of Publication (online): 2024/08/07
Year of first Publication: 2024
Publishing Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
Granting Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
Acceptance Date of the Thesis: 2024/07/01
Date of Publication (Server): 2024/08/08
Tags: Autonomous Surface Vehicle; FLS; Flood; Forward-looking Sonar; LiDAR; Mapping; USV
Page Count: 194
Faculties / Organisational entities: Kaiserslautern - Department of Computer Science (Fachbereich Informatik)
DDC Classification: 000 General works, Computer science, Information science / 004 Computer science
Licence: Creative Commons 4.0 - Attribution (CC BY 4.0)