Researchers from the University of California, Los Angeles (UCLA) and the University of North Carolina at Chapel Hill (UNC) have developed tools for automatically detecting natural disasters in images using deep learning and computer vision.
Their work was presented at the 2023 Conference on Computer Vision and Pattern Recognition (CVPR).
The tools, known as Disasternet and DisasterMapper, are designed to accurately identify and locate various types of natural disasters, including fires, floods, earthquakes, hurricanes, and tsunamis. This technology could play a crucial role in assessing damage, supporting rescue operations, and informing prevention efforts that protect affected populations.
Disasternet is a neural network trained on a dataset of more than 100,000 images drawn from multiple sources, including satellites, drones, social networks, and news sites. From this data, Disasternet classifies images according to the presence or absence of a natural disaster, and it recognizes 12 different types of natural disasters with a reported accuracy of 94%.
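To make the classification step concrete, the sketch below shows how a multi-class disaster classifier of this kind could be set up in PyTorch. The article does not describe Disasternet's actual architecture, so the ResNet-18 backbone, the preprocessing pipeline, and the class count handling here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a 12-class disaster image classifier.
# Assumption: a standard transfer-learning setup with a pretrained
# ResNet-18 backbone; Disasternet's real architecture is not public
# in this article, so everything here is illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 12  # e.g. fire, flood, earthquake, hurricane, tsunami, ...

# Standard ImageNet-style preprocessing for satellite/drone/news imagery.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load a pretrained backbone and replace its final fully connected
# layer with a 12-way classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

def classify(image):
    """Return the predicted disaster class index for a PIL image."""
    model.eval()
    with torch.no_grad():
        batch = preprocess(image).unsqueeze(0)  # add batch dimension
        logits = model(batch)                   # shape: (1, NUM_CLASSES)
        return logits.argmax(dim=1).item()
```

Fine-tuning a pretrained backbone with a replaced classification head is a common baseline for this kind of task; the 94% figure refers to the authors' own model, not to this sketch.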
DisasterMapper, on the other hand, is a tool for semantic image segmentation, a technique that divides an image into regions corresponding to distinct objects or categories. Building on Disasternet's predictions, DisasterMapper generates a segmentation map that pinpoints where a natural disaster appears within an image, with a reported localization accuracy of 87%.
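As an illustration of the segmentation step, the sketch below uses an off-the-shelf DeepLabV3 model to produce a per-pixel map separating disaster-affected pixels from background. DisasterMapper's internals are not detailed in the article, so the two-class setup and the model choice are assumptions made for demonstration only.

```python
# Minimal sketch of semantic segmentation for disaster localization.
# Assumption: a DeepLabV3 model with a 2-class head (background vs.
# disaster-affected area); DisasterMapper's real design is not
# described in the article.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_SEG_CLASSES = 2  # background vs. disaster-affected area

# weights=None yields an untrained network; in practice it would be
# trained on images paired with pixel-level disaster masks.
model = deeplabv3_resnet50(weights=None, num_classes=NUM_SEG_CLASSES)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(image):
    """Return an (H, W) tensor of per-pixel class indices for a PIL image."""
    model.eval()
    with torch.no_grad():
        batch = preprocess(image).unsqueeze(0)
        out = model(batch)["out"]            # shape: (1, classes, H, W)
        return out.argmax(dim=1).squeeze(0)  # per-pixel segmentation map
```

The resulting map assigns each pixel a class label, which is what allows a tool like DisasterMapper to localize the disaster within the frame rather than only flagging the image as a whole.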
The researchers say they plan to keep improving the tools' performance and to extend their use to real-world situations. They are also optimistic that their work will advance computer vision for analyzing images of natural disasters and, ultimately, provide valuable assistance to affected communities.