Opportunistic Sensing for Object and Activity Recognition from Multi-Modal, Multi-Platform Data

Security applications, including homeland security and surveillance, suffer from the problem of data deluge: it is easy to record data, but very hard to find anybody with the time to see or hear all of it. The goal of this research is to develop systems that can automatically reconfigure themselves as necessary to track potentially interesting people, objects, or events.

This project was funded by ARO MURI 31, 2009-2014, and is a collaboration among researchers from Rice University, the University of Maryland, the University of Illinois, Duke, UCLA, and Yale. Many of our publications have been written in collaboration with the US Army Research Labs. For more information, consult the menus at the top of this page.

People

Only University of Illinois personnel are listed on this page.

  • Investigators
    • Tamer Başar
    • Mark Hasegawa-Johnson
    • Thomas Huang
  • Post-Docs
    • Sourabh Bhattacharya
    • Jianchao Yang
  • Graduate Students
    • Po-Sen Huang
    • Zhaowen Wang

Research

From the BAA: “Opportunistic sensing refers to a paradigm for signal and information processing in which a sensing system can automatically discover and select sensor modalities and sensing platforms based on an operational scenario, determine the appropriate set of features and optimal means for data collection based on these features, obtain missing information by querying resources available, and use appropriate methods to fuse the data, resulting in an adaptive network that automatically finds scenario-dependent, objective-driven opportunities with optimized performance.”
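The paradigm quoted above begins with scenario-driven discovery and selection of sensing modalities. As a rough illustration only (the sensor names, utility scores, and greedy strategy below are assumptions for the sketch, not part of the project's software), the selection step might look like:

```python
# Hypothetical sketch of scenario-dependent sensor selection: rank the
# available modalities by an estimated utility for the current
# operational scenario and keep the top few. A real system would
# estimate these utilities online; the values here are placeholders.

def select_sensors(available, utility, budget):
    """Greedily choose up to `budget` sensors by scenario-dependent utility."""
    ranked = sorted(available, key=lambda s: utility[s], reverse=True)
    return ranked[:budget]

# Illustrative utilities for one hypothetical scenario.
utility = {"video": 0.9, "audio": 0.7, "radar": 0.4, "seismic": 0.2}
chosen = select_sensors(list(utility), utility, budget=2)
# chosen == ["video", "audio"]
```

In the full paradigm this selection would be re-run whenever the operational scenario changes, yielding the adaptive, objective-driven network the BAA describes.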

The goal of audiovisual and multimodal opportunistic sensing research is to develop a practical version of multimodal fusion based on joint manifold modeling, concatenating not raw data but features extracted from the data. We have modeled the joint distribution of the events and objects observed from different viewpoints and from different modalities, by projecting them onto a common probabilistic formalism designed to discriminate between presence and absence of named target classes. Variability is compensated at every stage of processing:

  • Sources that can be physically modeled (e.g., uncorrelated noise and some types of reverberation) have been removed in the initial feature extraction algorithms.
  • Sources of variability that can be modeled probabilistically have been compensated in the probabilistic models (e.g., using coupled hidden Markov models for audiovisual fusion).
  • Sources of variability for which we have training data have been compensated using supervector compensation techniques.
  • Finally, noise robustness is built into the manifold extraction algorithm itself by the use of generalization error bounds based on the Vapnik-Chervonenkis dimension of the projection algorithm.

By compensating noise early and often, it becomes possible to reliably learn a projection from the training data onto a discriminative audiovisual event manifold.
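The core fusion idea above, concatenating per-modality features rather than raw data and then projecting onto a low-dimensional discriminative manifold, can be sketched in miniature. All feature values and projection weights below are illustrative placeholders; in the project the projection is learned discriminatively from training data, which this sketch does not attempt.

```python
# Minimal sketch of feature-level audiovisual fusion: features extracted
# from each modality are concatenated into one joint vector, then mapped
# by a linear projection onto a low-dimensional space. The weights in P
# are made-up placeholders standing in for a learned projection.

def fuse_and_project(audio_feat, video_feat, projection):
    """Concatenate modality features, then apply a linear projection."""
    joint = audio_feat + video_feat          # feature-level concatenation
    return [sum(w * x for w, x in zip(row, joint)) for row in projection]

audio = [0.2, 0.5]        # e.g., acoustic features (illustrative values)
video = [0.1, 0.9, 0.3]   # e.g., appearance features (illustrative values)
# 2-D projection of the 5-D joint feature vector (placeholder weights).
P = [[1, 0, 0, 1, 0],
     [0, 1, 1, 0, 1]]
embedding = fuse_and_project(audio, video, P)
```

The point of the sketch is the ordering: noise compensation happens in the per-modality feature extractors before concatenation, so the projection only has to separate the target classes on the joint feature space.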