
Search and Rescue

When disasters happen – whether a natural disaster like a flood or earthquake, or a human-caused one like a bombing – it can be extremely dangerous to send first responders in, even though there are people who badly need help.

Drones can be useful in such situations, but most require individual pilots to fly the unmanned aircraft by remote control. That limits how quickly rescuers can view an entire affected area, and can delay actual aid from reaching victims.

“Autonomous drones could cover more ground more quickly but would only be more effective if they were able to independently help rescuers identify people in need,” said Vijayan Asari, professor of electrical and computer engineering. “At the University of Dayton Vision Lab, we are working on developing systems that can help spot people or animals – especially ones who might be trapped by fallen debris. Our technology mimics the behavior of a human rescuer, looking briefly at wide areas and quickly choosing specific regions to focus in on, to examine more closely.”

Disaster areas are often cluttered with downed trees, collapsed buildings, torn-up roads and other disarray that can make spotting victims in need of rescue very difficult. Asari’s team has developed an artificial neural network system that can run in a computer onboard a drone and can emulate some of the ways human vision works. It analyzes images captured by the drone’s camera and communicates notable findings to human supervisors. 

“First, our system processes the images to improve their clarity,” Asari explained. “Just as humans squint their eyes to adjust their focus, our technologies take detailed estimates of darker regions in a scene and computationally lighten the images. When images are too hazy or foggy, the system recognizes they’re too bright and reduces the whiteness of the image to see the actual scene more clearly.” 
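To make that idea concrete, here is a minimal sketch of that kind of pre-processing step in Python with OpenCV and NumPy. It only illustrates the general approach Asari describes, brightening dark scenes and toning down washed-out ones; it is not the Vision Lab's actual code, and the brightness thresholds and gamma value are assumptions chosen for clarity.

```python
import cv2
import numpy as np

def enhance_visibility(image_bgr):
    """Illustrative pre-processing: brighten dark frames and tone down
    washed-out (hazy) frames before any detection is attempted."""
    # Estimate overall scene brightness from a grayscale copy.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mean_brightness = gray.mean() / 255.0

    img = image_bgr.astype(np.float32) / 255.0
    if mean_brightness < 0.35:          # assumed threshold for "dark"
        # Gamma < 1 lifts detail out of shadowed regions.
        img = np.power(img, 0.6)
    elif mean_brightness > 0.75:        # assumed threshold for "hazy"
        # Stretch contrast so structure reappears in an over-bright scene.
        low, high = np.percentile(img, (5, 95))
        img = np.clip((img - low) / max(high - low, 1e-6), 0.0, 1.0)
    return (img * 255).astype(np.uint8)
```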

The system can also make other adjustments that mimic strategies used by the human brain. In a rainy environment, for example, humans take note of the parts of a scene that don’t change and the ones that do, such as raindrops. Asari’s technology uses the same strategy, continuously investigating the contents of each location in a sequence of images to get clear information about the objects in that location.
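One common way to exploit that idea, sketched below, is a per-pixel temporal median over a short run of frames: pixel values that stay the same (the scene) survive, while values that change from frame to frame (raindrops, blowing debris) are filtered out. This is a simplified stand-in for whatever temporal analysis the lab's system actually performs, and it assumes the frames are already roughly aligned.

```python
import numpy as np

def temporal_median(frames):
    """Keep, for every pixel, the value that persists across a short run of
    (roughly aligned) frames. Static scene content survives; transient
    occluders such as raindrops, which differ from frame to frame, do not."""
    stack = np.stack(frames, axis=0).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)
```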

In addition, the system is intelligent enough to generalize from what it sees. For example, it can identify people in various positions, such as lying prone or curled in the fetal position, even from different viewing angles and in varying lighting conditions. It can also detect and locate parts of an object: a leg sticking out from under rubble, a hand waving at a distance or a head popping up above a pile of wooden blocks.

“During its initial scan of the landscape, our system examines the ground to find possible objects of interest or regions worth further examination. Then our system investigates each selected region to obtain information about the shape, structure and texture of objects there. When it detects a set of features that matches a human being or part of a human, it flags that as a location of a victim,” said Asari.
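The quote describes a coarse-to-fine search: scan the whole frame cheaply, then spend the expensive analysis only on promising regions. The Python sketch below shows the shape of such a pipeline; the variance test, tile size, score threshold and the classify_patch function (standing in for a trained neural network) are all placeholders, not details of the Vision Lab's system.

```python
import numpy as np

def find_candidate_regions(frame, tile=256, variance_threshold=400.0):
    """Stage 1: cheap scan of the whole frame. Tiles with enough visual
    structure (high pixel variance) are kept for closer examination."""
    h, w = frame.shape[:2]
    candidates = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = frame[y:y + tile, x:x + tile]
            if patch.var() > variance_threshold:
                candidates.append((x, y, tile, tile))
    return candidates

def flag_victims(frame, classify_patch):
    """Stage 2: run a detector on each candidate region. `classify_patch`
    stands in for a trained network that scores shape, structure and
    texture; anything above the (illustrative) threshold is flagged."""
    detections = []
    for x, y, w, h in find_candidate_regions(frame):
        score = classify_patch(frame[y:y + h, x:x + w])
        if score > 0.5:
            detections.append({"box": (x, y, w, h), "score": float(score)})
    return detections
```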

The drone also collects GPS data about its own location and senses how far it is from the objects it’s photographing. That information lets the system calculate the exact location of each person needing assistance and alert rescuers.
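As a rough illustration of that last step, the snippet below projects a detection onto the map by offsetting the drone's GPS fix by the measured bearing and ground distance to the person. It uses a simple flat-earth approximation, which is one reasonable choice over short ranges rather than a description of the actual system.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def project_position(drone_lat, drone_lon, bearing_deg, ground_distance_m):
    """Offset the drone's GPS fix by the bearing and ground distance to a
    detected person. A flat-earth approximation like this is adequate over
    the few hundred metres a drone camera typically covers."""
    bearing = math.radians(bearing_deg)
    d_north = ground_distance_m * math.cos(bearing)
    d_east = ground_distance_m * math.sin(bearing)
    lat = drone_lat + math.degrees(d_north / EARTH_RADIUS_M)
    lon = drone_lon + math.degrees(
        d_east / (EARTH_RADIUS_M * math.cos(math.radians(drone_lat)))
    )
    return lat, lon
```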

The entire process – capturing an image, processing it for maximum visibility and analyzing it to identify people who might be trapped or concealed – takes about one-fifth of a second on a standard laptop computer carried by the drone.

This article is adapted from a piece that originally appeared in The Conversation.