Monitoring Brain Activity Outside the Lab and Beyond Traditional Limitations
Ryerson Dept. of Computer Science alumnus Salah Sharieh is a senior technology innovator with more than 20 years of experience in business and technology. After completing his Master’s degree in Computer Science at Ryerson University (our first graduate), he continued his studies at McMaster University, where he earned his Doctor of Philosophy in Computer Science.
During his studies at Ryerson University, Salah broadened his research horizons and combined them with his practical experience, conducting interdisciplinary research on brain spectroscopy and wireless networks. He developed a mobile solution framework to monitor human brain function during real-life activities. The framework utilizes the internet, GSM wireless networks, Bluetooth technology and a number of data protocols, and consists of three main parts: a portable Bluetooth near-infrared light sensor, a personal digital assistant (PDA) and a personal computer (PC). Real-time data acquisition is performed by the sensor, while mobility is provided by the GSM-enabled PDA. The data travels through several protocol layers until it reaches its final destination, the host PC. The result is a powerful, lightweight system for monitoring human brain function in real-life situations outside a lab environment. Several software components were developed to integrate all of these technologies and devices. Salah applied the framework to study brain activity in people trying to quit smoking.
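The sensor-to-PDA-to-PC relay described above can be sketched in miniature. The article does not specify the actual packet format, so the frame layout below (a length prefix, a timestamp and a single near-infrared reading) is purely an illustrative assumption of how readings might be serialized before being relayed over Bluetooth and GSM to the host PC.

```python
import struct

# Hypothetical frame layout (not the framework's real protocol):
# 2-byte length prefix, then an 8-byte timestamp (ms) and an
# 8-byte float reading, all in network byte order.

def frame_reading(timestamp_ms: int, reading: float) -> bytes:
    """Pack one sensor reading into a length-prefixed frame on the sensor/PDA side."""
    payload = struct.pack("!Qd", timestamp_ms, reading)
    return struct.pack("!H", len(payload)) + payload

def parse_reading(frame: bytes) -> tuple:
    """Unpack a frame produced by frame_reading on the host-PC side."""
    (length,) = struct.unpack("!H", frame[:2])
    return struct.unpack("!Qd", frame[2:2 + length])
```

Because the frame is self-describing (the length prefix), the same parsing code works regardless of which transport (Bluetooth, GSM, TCP/IP) carried the bytes, which is the point of layering the protocols as the framework does.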
Salah was drawn to our state-of-the-art downtown campus and first-rate professors who provide great support and mentorship to each student. Furthermore, Ryerson’s culture provides opportunities to build real-life applications that can be used in real enterprises.
Salah has more than twenty peer-reviewed publications and has contributed to several books. He is also a technical reviewer for several journals and conferences and a member of the CIO Association of Canada, IEEE and ACM. Recently, Salah led the development of the National Occupational Standards for Cyber Security.
He has been appointed an adjunct faculty member at Ryerson and currently teaches in both the G. Raymond Chang School of Continuing Education’s part-time undergraduate computer science degree program and the Ted Rogers School of Management’s graduate program. He continues to collaborate on research with several computer science faculty members.
Working on 360 Degree Video with 2D and Photography Technology
Scott Herman is a software developer and designer with a Master’s degree in Computer Science from Ryerson. He currently holds the position of Chief Technology Officer at DEEP Inc.
"Ryerson was a great fit for me as I wanted to learn about a range of subjects while performing research. When I arrived in 2011 I was pleased to find out my expectation met reality! With a wide selection of advanced courses being available and the ability to perform hands on research, I had a positive graduate school experience. Also, after arriving at Ryerson I found the Professors and Administration staff in the Computer Science department were very helpful if I ever had a question or curiosity needing to be answered. I was always met with an open door and smile."
By taking advanced classes ranging from genetic programming to computer vision, Herman gained invaluable knowledge and insight into a variety of subjects. In addition, his lab, NCART, worked with the Ontario Provincial Police’s Urban Search and Rescue team; interacting directly with the people who would benefit from his research helped him understand the entire process of building software, from requirements gathering to actual development.
His thesis centred on creating 3D models of real-world disaster sites that are then explored in a game-engine simulation. First responders can use the simulation to immerse themselves in an accurate virtual model of a disaster site and determine the best points of entry and egress, all from the safety of a command post. He published multiple papers and presented at conferences on his thesis research and findings.
After graduating, he became the lead developer at DEEP Inc., a 360-degree video and virtual reality company. In that capacity, he developed software that seamlessly combines 360-degree video with traditional media such as flat film and photographs, a technique used in the award-winning interactive documentary Polar Sea 360. Today, as Chief Technology Officer at DEEP, he leads a team of developers creating the next generation of 360-degree video and virtual reality software tools. The team has recognized how difficult it is to create narrative VR content and hopes to make it easier in the future.
His advice to students? "Take classes on subjects you know nothing about! You will be surprised what you can learn and will find that the knowledge can be applied to a variety of scenarios."
Detecting Objects and their Actions for Predictable Outcomes
Dr. Konstantinos Derpanis is an expert in computer vision: using computer technology to recover useful information from images. He is the director of the Ryerson Vision Lab (see ‘The Visionary: Building Computers That See’). His current research has explored 3D human pose estimation from monocular video, a new deep convolutional neural network (CNN) architecture that learns pixel embeddings, and a new state of the art for document image classification and retrieval using features learned by deep CNNs. He received the Faculty of Science Dean's Teaching Award in December, a month after being promoted to Associate Professor in Ryerson University’s Dept. of Computer Science. He received his Ph.D. and M.Sc. in Computer Science from York University.
He has conducted research on a United States Army Research Laboratory (ARL)-funded project entitled “Robotics Collaborative Technology Alliance (RCTA)”, which focused on action detection and understanding, as well as object classification, detection and reasoning. He also worked on a Defense Advanced Research Projects Agency (DARPA)-funded project entitled “Autonomous Robotic Manipulation (ARM)”, which focused on developing software that enables autonomous robotic visual perception, grasping and manipulation of objects.
With the support of NSERC Discovery and Engage grants, Derpanis’ lab is investigating modelling and understanding human actions in videos. “Humans are ubiquitous in many video types, and their actions represent a prime source of information for scene understanding.” He is working on automated solutions for both spotting actions in videos (identifying where and when a particular action is performed) and then classifying those actions. Derpanis believes this could lay the groundwork for a comprehensive online video-search system.
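The two-stage idea of spotting and then classifying actions can be illustrated with a toy sketch. This is not the lab’s actual method (which uses learned models over video features); it simply assumes we already have a per-frame “action score” and per-class scores, and shows thresholding for temporal localization followed by labelling each spotted segment.

```python
# Toy illustration of spot-then-classify, not an actual research method.

def spot_segments(scores, threshold=0.5):
    """Return (start, end) frame ranges where the per-frame score stays above threshold."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # an action segment begins
        elif s < threshold and start is not None:
            segments.append((start, i))    # the segment ends just before frame i
            start = None
    if start is not None:                  # segment runs to the end of the video
        segments.append((start, len(scores)))
    return segments

def classify_segment(class_scores):
    """Label a spotted segment with the class whose average score is highest."""
    averages = {c: sum(v) / len(v) for c, v in class_scores.items()}
    return max(averages, key=averages.get)
```

Real systems replace both the hand-set threshold and the per-class averages with learned components, but the decomposition into localization ("where and when") and classification ("which action") mirrors the problem statement above.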