Tractable


01.07.2022




GaLeNet: Multimodal Learning for Disaster Prediction, Management and Relief

Our paper was accepted to the CVPR 2022 Workshop on Multimodal Learning for Earth and Environment. Here's a quick summary and a link to the full paper.

Authors:

Rohit Saha, Mengyi Fang, Angeline Yasodhara, Kyryl Truskovskyi, Azin Asgarian, Daniel Homola, Raahil Shah, Frederik Dieleman, Jack Weatheritt, Thomas Rogers

Abstract:

After a natural disaster, such as a hurricane, millions are left in need of emergency assistance. To allocate resources optimally, human planners need to accurately analyze data that can flow in large volumes from several sources. This motivates the development of multimodal machine learning frameworks that can integrate multiple data sources and leverage them efficiently. To date, the research community has mainly focused on unimodal reasoning to provide granular assessments of the damage. Moreover, previous studies mostly rely on post-disaster images, which may take several days to become available. In this work, we propose a multimodal framework (GaLeNet) for assessing the severity of damage by complementing pre-disaster images with weather data and the trajectory of the hurricane. Through extensive experiments on data from two hurricanes, we demonstrate (i) the merits of multimodal approaches compared to unimodal methods, and (ii) the effectiveness of GaLeNet at fusing various modalities. Furthermore, we show that GaLeNet can leverage pre-disaster images in the absence of post-disaster images, preventing substantial delays in decision making.
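To give a flavour of what "fusing various modalities" can look like, here is a minimal late-fusion sketch in plain Python: each modality (pre-disaster image features, weather data, hurricane trajectory) gets its own small encoder, the embeddings are concatenated, and a shared head produces a severity score. The dimensions, ReLU activation, concatenation design, and all names are illustrative assumptions for exposition, not the architecture described in the paper.

```python
import random

random.seed(0)


def linear(x, w, b):
    # y_j = sum_i x_i * w[i][j] + b[j]  (w has len(x) rows, len(b) columns)
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*w), b)]


def make_layer(in_dim, out_dim):
    # Small random weights, zero bias (illustrative initialisation).
    w = [[random.uniform(-0.1, 0.1) for _ in range(out_dim)]
         for _ in range(in_dim)]
    b = [0.0] * out_dim
    return w, b


class LateFusionModel:
    """Hypothetical sketch: per-modality encoders + a shared fusion head."""

    def __init__(self, modality_dims, embed_dim=8):
        # One encoder per modality, mapping it to a common embedding size.
        self.encoders = [make_layer(d, embed_dim) for d in modality_dims]
        # Head maps the concatenated embeddings to a single severity score.
        self.head = make_layer(embed_dim * len(modality_dims), 1)

    def forward(self, modalities):
        fused = []
        for x, (w, b) in zip(modalities, self.encoders):
            # Encode each modality and apply a ReLU nonlinearity.
            fused.extend(max(0.0, v) for v in linear(x, w, b))
        w, b = self.head
        return linear(fused, w, b)[0]  # scalar damage-severity score


# Illustrative feature sizes: 16-dim image embedding (e.g. from a
# pretrained CNN), 4 weather variables, 6 trajectory features.
model = LateFusionModel([16, 4, 6])
score = model.forward([[0.5] * 16,
                       [1.0, 0.0, 0.3, 0.7],
                       [0.1] * 6])
print(score)
```

Late fusion keeps each modality's encoder independent, so a missing modality (e.g. no post-disaster imagery yet) can be handled by swapping in a different input without retraining the others.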

