
The opportunity to work on tangible, real-world projects that matter

From making our data more secure to helping first responders save lives, we are at the forefront of developing solutions for a changing world. Our work has the potential to radically alter everything from how search-and-rescue operations are conducted in urban disaster zones to how we search for information online. And while we have built our program on a strong theoretical base, it is the real-world application of our research that we find most exciting. By looking at problems through a computer science research lens, we can develop systems and technologies that provide relief to those most in need. We face complex issues and new threats, and our students emerge well-equipped to respond and make an impact at the local, national and international levels.

Disaster Relief & Urban Search and Rescue using Gaming Software

When disaster strikes, a few minutes can mean the difference between life and death. But deciding which resources to deploy and where to focus search efforts can be a time-consuming process. Dr. Alex Ferworn's lab, the Network-Centric Applied Research Team (N-CART), works to improve computational public safety (CPS): the application of computational resources, theory and practice to support and improve public safety processes.

For example, the lab's Disaster Scene Reconstruction (DSR) project gathers data through a complex optical sensor carried by an unmanned aerial vehicle – and, soon, by dogs. This data is then optimized for use within a game engine, with physics models applied to the objects in the reconstructed scene. Their goal? To provide emergency first responders with a tool for modelling a disaster in real time, as it unfolds. This way, responders can develop the most effective strategy for tackling the situation without taking needless risks or committing resources that are not needed.
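
To make the idea concrete, here is a minimal sketch of what such a scan-to-game-engine pipeline might look like, using the open-source Open3D library. This is not N-CART's actual toolchain; the file names and parameter values are purely illustrative.

```python
# Illustrative sketch: turning UAV sensor data (a point cloud) into a mesh
# that a game engine could load. File names and parameters are hypothetical.
import open3d as o3d

# Load the raw scan captured by the aerial vehicle's optical sensor.
pcd = o3d.io.read_point_cloud("rubble_scan.ply")

# Down-sample so the model stays light enough for real-time use.
pcd = pcd.voxel_down_sample(voxel_size=0.05)

# Surface normals are required for Poisson surface reconstruction.
pcd.estimate_normals()

# Reconstruct a surface mesh from the points.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Export in a format common game engines can import; the engine then
# applies its physics models to the objects in the reconstructed scene.
o3d.io.write_triangle_mesh("disaster_scene.obj", mesh)
```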

Their work also focuses on automatically detecting “holes” within piles of rubble – holes where survivors could potentially be found. This research can assist first responders in prioritizing how they conduct their searches, allowing them to focus on areas with the greatest potential for rescue. Ferworn has partnered with a wide range of emergency response organizations in Canada and the United States, including the U.S. Department of Homeland Security and the Federal Emergency Management Agency (FEMA), to develop this life-saving technology. In the wake of both natural and man-made disasters, relief efforts must be delivered swiftly and precisely, targeting the individuals who require immediate medical care. A human being can survive for two days in a rubble pile, so “if we can develop techniques and algorithms that speed up the search and reduce the likelihood of creating a secondary collapse,” says Ferworn, “we consider our work successful.”
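
The underlying idea can be illustrated with a simplified sketch: voxelize the scanned rubble pile, then flag empty voxels that sit beneath its surface as candidate voids. This is a toy version of the concept, not Ferworn's published algorithm; the function name and voxel size are assumed.

```python
# Simplified "hole" detection: empty voxels below the rubble surface are
# potential survivable spaces. Illustrative only.
import numpy as np

def candidate_voids(points, voxel=0.25):
    """points: (N, 3) array of x, y, z samples from a rubble-pile scan."""
    # Map each point to an integer voxel index.
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    nx, ny, nz = idx.max(axis=0) + 1
    occupied = np.zeros((nx, ny, nz), dtype=bool)
    occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    voids = []
    for i in range(nx):
        for j in range(ny):
            column = occupied[i, j]
            if not column.any():
                continue  # open air: no rubble anywhere in this column
            top = np.nonzero(column)[0].max()  # surface height here
            # Empty voxels below the surface are candidate voids.
            voids.extend((i, j, k) for k in range(top) if not column[k])
    return voids
```

A real system would also reason about void size and reachability, but even this crude grid pass shows how search effort could be ranked by rescue potential.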

Building Computers That See using Video Analysis Technology

Professor Konstantinos Derpanis uses machine vision techniques to recover useful information from images: essentially, developing computers that 'see'. Supported by NSERC Discovery and Engage grants, Derpanis’ lab, the Ryerson Vision Lab (RVL), is broadening the scope of how we understand video as it relates to the world around us.

“We have witnessed a deluge of video content due to advances in computing power and networking technologies,” says Derpanis. “YouTube, for instance, reports that 65 hours of video are uploaded to their service every minute. Yet most current solutions rely on humans visually inspecting videos to extract meaning.”

Derpanis is developing two automated solutions for mining videos for information. The first, geometric, involves recovering the three-dimensional layout of the surrounding environment. The second, semantic, involves interpreting what people are doing in that environment. These solutions have potential applications in video indexing and browsing, intelligent surveillance, and human–computer interfaces. “Humans are ubiquitous in many video types and their actions represent a prime source of information for scene understanding,” says Derpanis.
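
As a rough illustration of the geometric side, a single video frame can be turned into a per-pixel depth map with an off-the-shelf monocular depth model such as MiDaS. The sketch below is a stand-in for the lab's own methods, not a description of them; the model choice and file name are assumptions.

```python
# Illustrative sketch: recovering relative scene depth from one video frame
# with the open-source MiDaS model (loaded via torch.hub).
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

frame = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(frame))  # relative depth per pixel
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=frame.shape[:2], mode="bicubic"
    ).squeeze()
# 'depth' now encodes the three-dimensional layout of the scene in the frame.
```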

His team has developed automated methods for both spotting and classifying human actions in videos. This research has the potential to grow into a comprehensive online video-search system – one that doesn't rely on painstaking human labour to sift through the information. If computers can see, then we can create better ways of retrieving and classifying documents – affecting everything from filing to filtering out junk mail.
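
On the semantic side, one way to sketch action classification is with a pretrained 3-D convolutional network, such as torchvision's r3d_18 trained on the Kinetics-400 action dataset. This is illustrative only and not RVL's system; the clip here is a random stand-in for real video frames.

```python
# Illustrative sketch: classifying the human action in a short clip with a
# pretrained 3-D CNN from torchvision.
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()

# clip: (T, H, W, C) uint8 frames; a random stand-in replaces a real video.
clip = torch.randint(0, 255, (16, 128, 171, 3), dtype=torch.uint8)
batch = preprocess(clip.permute(0, 3, 1, 2)).unsqueeze(0)  # (1, C, T, H, W)

with torch.no_grad():
    label = model(batch).softmax(1).argmax(1).item()
print(weights.meta["categories"][label])  # e.g. "archery", "jogging", ...
```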