Statistical Speech Technology

Nancy McElwain, Mark Hasegawa-Johnson, Bashima Islam, Brandon Meline, Maria Davila, & Keri Heilman. Validation of a Virtual Still Face Procedure and Deep Learning Algorithms to Assess Infant Emotion Regulation and Infant-Caregiver Interactions in the Wild. no. 1 R01 DA059422-01, National Institute on Drug Abuse, 2024–2029

Nancy McElwain, Mark Hasegawa-Johnson, & Bashima Islam. Automated Assessment of Infant Sleep/Wake States, Physical Activity, and Household Noise Using a Multimodal Wearable Device and Deep Learning Models. no. 1 R01 DK138866-01, National Institute of Diabetes and Digestive and Kidney Diseases, 2024–2029


Venu Govindaraju, Jinjun Xiong, Srirangaraj Setlur, Pamela Hadley, Julie Kientz, David Feil-Seifer, Brian Graham, Megan-Brette Hamilton, Anil K Jain, Ye Jia, Jennifer Taps Richard, Tracy A Sawicki, Alison Hendricks, Ifeoma Nwogu, Letitia Thomas, Christine Wang, Wenyao Xu, Maneesh Agrawala, Diego Aguirre, …, Shaofeng Zou. AI Institute for Transforming Education for Children with Speech and Language Processing Challenges. no. 2229873, NSF DRL Division of Research on Learning, 2023–2028


Mark Hasegawa-Johnson, Zsuzsanna Fagyal, Najim Dehak, Piotr Zelasko, & Laureano Moro-Velazquez. FAI: A New Paradigm for the Evaluation and Training of Inclusive Automatic Speech Recognition. no. 2147350, NSF IIS Division of Information and Intelligent Systems, 2022–2026

Inha University, Ewha Womans University, & University of Illinois at Urbana-Champaign. Workshop on Assistive and Inclusive Technology for Digital Accessibility. February 2025

Changdong Yoo, & Mark Hasegawa-Johnson. Deep F-measure Maximization for Fairness in Speech Understanding. no. 507691, IITP Institute for Information & Communications Technology Promotion, Korea, 2020–2025

Mark Hasegawa-Johnson. Unrestricted gift. no. 638245, Picsart Corporation, 2022–2025


Mark Hasegawa-Johnson, Heejin Kim, Clarion Mendes, Meg Dickinson, Erik Hege, & Chris Zwilling. Speech Accessibility Project. no. 548308-392030, AI Accessibility Coalition (Amazon, Apple, Google, Meta and Microsoft), 2022–2025


Nancy McElwain, Mark Hasegawa-Johnson, Dominika M Pindus, Bashima Islam, Charles Davies, & Maria Davila. Continuous Capture of Infant Feeding, Motor Activity, and Sleep Biorhythms via an Infant Wearable Platform and Machine Learning Approaches. no. 2, University of Illinois Personalized Nutrition Initiative, 2022–2023

Amit Juneja, Mark Hasegawa-Johnson, & James Mireles. STTR Phase II Proposal F2-15825 Virtual Reality Visualization of Complex and Unstructured Data and Relationships. no. F2-15825, USAF United States Air Force, 2022–2023

Najim Dehak, & Mark Hasegawa-Johnson. RI: Small: Collaborative Research: Automatic Creation of New Phone Inventories. no. 19-10319, NSF IIS Division of Information and Intelligent Systems, 2019–2022

Narendra Ahuja, Mark Hasegawa-Johnson, David Beiser, & David Chestek. Adding Audio-Visual Cues to Signs and Symptoms for Triaging Suspected or Diagnosed COVID-19 Patients. no. 23, C3.ai Digital Transformation Institute, 2020–2022

Amit Juneja, Mark Hasegawa-Johnson, & James Mireles. STTR Virtual Reality Visualization of Complex and Unstructured Data and Relationships. no. FA864922P0047, USAF United States Air Force, 2021–2022

Nancy McElwain, Susan Caldecott-Johnson, Mark Hasegawa-Johnson, Siraj Siddiqi, & Romit Roy Choudhury. Early Detection of Developmental Disorders via a Remote Sensing Platform. no. 6, Jump ARCHES, 2020–2021

Karrie Karahalios, Siraj Siddiqi, David Forsyth, Mark Hasegawa-Johnson, & Hedda Meadan. Visualizations of Social Communication Behavior of Children with Autism. no. 5, Jump ARCHES, 2020–2021

Mark Hasegawa-Johnson, Suma Bhat, Dan Morrow, & James Graumlich. Using Conversational Agents to Support Older Adult Learning for Health. no. 4, UIUC College of Education Technology Innovation in Educational Research and Design, 2020–2021


Mark Hasegawa-Johnson. LanguageNet: Transfer Learning Across a Language Similarity Network. no. HR0011-15-2-0043, DARPA Low Resource Languages for Emergent Incidents (LORELEI), 2015–2019


Mark Hasegawa-Johnson, Hanady Mansour Ahmed, Eiman Mustafawi, & Haitham El-Bashir. The Family as the Unit of Intervention for Speech-Generating Augmentative/Assistive Communication. no. NPRP 7-766-1-140, QNRF Qatar National Research Fund, 2014–2018


Mark Hasegawa-Johnson, Lav Varshney, & Preethi Jyothi. EAGER: Matching Non-Native Transcribers to the Distinctive Features of the Language Transcribed. no. 1550145, NSF IIS Division of Information and Intelligent Systems, 2015–2018

Dan Morrow, Suma Pallathadka Bhat, Mark Hasegawa-Johnson, Thomas Huang, James Graumlich, Ann Willemsen-Dunlap, & Don Halpin. Interactive Technology Support for Patient Medication Self-Management. no. 4, Jump ARCHES, 2016–2018

Mark Hasegawa-Johnson, Gautham Mysore, & Yang Zhang. Speech Resynthesis for Vocal Style Modification. no. 2015.m.uiuc, Adobe Research, 2015–2017

Nancy Chen, & Mark Hasegawa-Johnson. Mismatched Crowdsourcing for 80-Language Speech Recognition. no. 36, Institute for Infocomm Research (I$^2$R), Agency for Science, Technology and Research (A*STAR), Singapore, 2015–2017


Mark Hasegawa-Johnson, Nancy Chen, & Boon Pang Lim. Noisy Channel Models for Massively Multilingual Automatic Speech Recognition. no. 9, Advanced Digital Sciences Center (ADSC), Singapore, 2015–2017


Daniel G Morrow, Mark Hasegawa-Johnson, Thomas Huang, & William Schuh. Collaborative Patient Portals: Computer-based Agents and Patients’ Understanding of Numeric Health Information. no. R21-HS022948, AHRQ Agency for Healthcare Research and Quality, 2014–2016

Mark Hasegawa-Johnson, Gregg Wilensky, & Xuesong Yang. Speech2Vec: Speech-Based Semantic Vectors. no. 2015.w.uiuc, Adobe Research, 2015–2016


Jia Chen Ren, Lawrence Angrave, & Mark Hasegawa-Johnson. Capturing, Transcribing, Searching, Analyzing, Adaptive: Learning in a curated classroom. no. 5, Illinois Learning Sciences Design Initiative (ILSDI), University of Illinois, 2015–2016


Marshall Poole, David Forsyth, Feniosky Pena-Mora, Mark Hasegawa-Johnson, Kenton McHenry, & Peter Bajcsy. CDI-Type II: Collaborative Research: Groupscope: Instrumenting Research on Interaction Networks in Complex Social Contexts. no. 0941268, NSF BCS Division of Behavioral and Cognitive Sciences, 2010–2015

Mark Hasegawa-Johnson, Laura DeThorne, Tracy Gunderson, Julie Hengst, Thomas Huang, & Pat Malik. Pseudo-intelligent mediators (“Robo-Buddies”) to improve communication between students with and students without physical disabilities. no. 1, Illinois Innovation Initiative (In3), 2011–2015


Richard Baraniuk, Volkan Cevher, Lydia Kavraki, Wotao Yin, John Benedetto, Rama Chellappa, Larry Davis, Tamer Basar, Mark Hasegawa-Johnson, Thomas Huang, Ronald Coifman, Lawrence Carin, & Stanley Osher. Opportunistic Sensing for Object and Activity Recognition from Multi-Modal, Multi-Platform Data. no. W911NF-09-1-0383, ARO MURI, 2009–2014


Mark Hasegawa-Johnson, Camille Goudeseune, Hank Kaczmarski, & Thomas Huang. FODAVA-Partner: Visualizing Audio for Anomaly Detection. no. 08-07329, NSF CCF Division of Computing and Communication Foundations, 2008–2013


Mark Hasegawa-Johnson, & Eiman Mustafawi. Multi-dialect phrase-based speech recognition and machine translation for Qatari broadcast TV. no. NPRP 09-410-1-069, QNRF Qatar National Research Fund, 2010–2013

Julie Hengst, Laura S. DeThorne, Ai Leen Choo, Mariana Aparicio Betancourt, Mark Hasegawa-Johnson, Karrie Karahalios, Paul Prior, Hedda Meadan-Kaplansky, David Gooler, Tracy Gunderson, & Andrew Moss. Conversation Strategies for Students With and Students Without Physical Disabilities. no. 3, University of Illinois Graduate College Focal Point Program, 2012–2013

Jennifer Cole, & Mark Hasegawa-Johnson. RI-Collaborative Research: Landmark-based robust speech recognition using prosody-guided models of speech variability. no. 07-03624, NSF IIS Division of Information and Intelligent Systems, 2007–2012


Torrey Loucks, Chilin Shih, Ryan Shosted, & Mark Hasegawa-Johnson. Speech Production Research Initiative. no. 2, University of Illinois Graduate College Focal Point Program, 2010–2011

Mark Hasegawa-Johnson, Adrienne Perlman, Thomas Huang, & Jon Gunderson. Audiovisual Distinctive-Feature-Based Recognition of Dysarthric Speech. no. 05-34106, NSF IIS Division of Information and Intelligent Systems, 2005–2010

Richard Sproat, J. Kathryn Bock, Brian Ross, Mark Hasegawa-Johnson, & Chilin Shih. DHB: An Interdisciplinary Study of the Dynamics of Second Language Fluency. no. 06-23805, NSF IIS Division of Information and Intelligent Systems, 2006–2010

Mark Hasegawa-Johnson, Thomas Huang, & Dirk Bernhardt-Walther. RI Medium: Audio Diarization – Towards Comprehensive Description of Audio Events. no. 08-03219, NSF IIS Division of Information and Intelligent Systems, 2008–2010


Mark Hasegawa-Johnson, Adrienne Perlman, Thomas S Huang, Jon Gunderson, Ken Watkin, & Heejin Kim. Audiovisual Description and Recognition of Audible and Visible Dysarthric Phonology. no. 5R21 DC008090, NIH NIDCD, 2006–2009

Mark Hasegawa-Johnson. CAREER: Landmark-Based Speech Recognition in Music and Speech Backgrounds. no. 01-32900, NSF IIS Division of Information and Intelligent Systems, 2002–2007

Mark Hasegawa-Johnson, Jennifer Cole, & Chilin Shih. Prosodic, Intonational, and Voice Quality Correlates of Disfluency. no. 04-14117, NSF IIS Division of Information and Intelligent Systems, 2004–2007

Richard Sproat, Chilin Shih, Mark Hasegawa-Johnson, Dan Roth, Kay Bock, & Brian Ross. Automated Methods for Second-Language Fluency Assessment. no. 3, University of Illinois Critical Research Initiative, 2005–2007

Thomas S Huang, & Mark Hasegawa-Johnson. Audiovisual Emotional Speech AVATAR. no. RPS 31, Motorola Communications Center, 2005–2007

Mark Hasegawa-Johnson. Rhythmic Organization of Durations for Automatic Speech Recognition. no. 6, UIUC Research Board, 2005–2006


Mark Hasegawa-Johnson, & Thomas S Huang. Audiovisual Speech Recognition: Data Collection and Feature Extraction in Automotive Environment. no. RPS 19, Motorola Communications Center, 2002–2005


Mark Hasegawa-Johnson, & Jennifer Cole. Prosody-Dependent Speech Recognition. no. 2, University of Illinois Critical Research Initiative, 2002–2004

Weimo Zhu. Development and Validation of an E-diary System for Assessing Physical Activity and Travel Behaviors. no. 47334, Robert Wood Johnson Foundation, 2002–2004


Mark Hasegawa-Johnson. Immersive Headphone-free Virtual Reality Audio. no. 23, University of Illinois Research Board, 2001–2002


Mark Hasegawa-Johnson. Acoustic Features for Phoneme Recognition. no. 2002.1.4, Phonetact Incorporated, 2002


Mark Hasegawa-Johnson. Factor Analysis of the Tongue Shapes of Speech. no. 7, University of Illinois Research Board, 1999–2000

Mark Hasegawa-Johnson. Factor Analysis of MRI-Derived Articulator Shapes. Individual National Research Service Award, no. 1 F32 DC000323, NIH NIDCD, 1997–1999