Old Dominion University

College of Engineering and Technology

Department of Electrical and Computer Engineering

All lectures will be held online at 3:00 p.m. on Fridays via: https://vs.prod.odu.edu/kvs/zoom_odu/?cid=202410_ECE731VS_11021&nonmidas=1

Choose the non-Midas login:

NON-MIDAS LOGIN

Username/Email: access@odu.edu     Password: courseACCESS#1

For more information, contact Dr. Chung-Hao Chen at (757) 683-3475 or email cxchen@odu.edu.

Friday, October 4th Seminar Topic:

COMPUTATIONAL ANALYSIS OF FACIAL EXPRESSION PRODUCTION AND PERCEPTION FOR AUTISM CANDIDATE BIOMARKER DISCOVERY by Megan Witherow, Post-Doctoral Research Associate in the Vision Lab, Department of Electrical and Computer Engineering, Old Dominion University

Abstract:

The heterogeneity of perception and production of facial expressions in autism spectrum disorder (ASD) suggests the potential presence of behavioral biomarkers that may stratify individuals on the spectrum into more internally homogeneous subgroups. Such stratification biomarkers may identify prognostic subgroups with different trajectories of longitudinal symptom development, or treatment subgroups for selective enrollment in interventions, e.g., to improve social skills. High-speed internet and ease of access to technology have enabled remote, scalable, affordable, and timely access to medical care, such as measurement of ASD-related facial expression behaviors in familiar environments to complement clinical observation. Computational analysis of video tracking (VT) of facial expression production and eye tracking (ET) of facial expression perception may aid in the discovery of stratification biomarkers for children and young adults diagnosed with ASD. Deep learning techniques such as convolutional neural networks have shown promise for fine-grained facial expression analysis (FEA) of VT data based on the Facial Action Coding System (FACS). However, open challenges remain: overcoming the domain shift between adult and child facial expressions for FEA across age groups, developing FACS-labeled 3D avatar-based stimuli to improve user engagement when eliciting facial expressions, and evaluating behavioral measurements (production and perception) against ASD candidate biomarker selection criteria (construct validity and group discriminability). Therefore, we propose a novel contrastive deep domain adaptation approach that fuses deep texture features with geometric landmark features for age-invariant child/adult FEA, develop FACS-labeled customizable avatars for improved user engagement, and conduct an online pilot study of 11 autistic children and young adults and 11 age- and gender-matched neurotypical (NT) individuals.
Participants complete validated facial expression recognition and mimicry tasks using the FACS-labeled 3D avatar-based stimuli while their facial expression production and perception are captured by webcam-based VT and ET. Domain-adapted deep learning models perform FEA of the collected VT data. We assess construct validity, i.e., that the tasks measure the intended phenomena, via analysis of variance of the NT group's responses. For group discriminability (ASD vs. NT), we apply the Boruta statistical method, which circumvents unrealistic assumptions of normality and independence in the ASD group, to identify measurements that capture group-level behavioral differences. Extensive statistical analyses identify one candidate ET biomarker and 14 additional ET and VT measurements that may be candidates for more comprehensive future studies with larger sample sizes for validation and clinical translation.

Bio: Megan A. Witherow is a Post-Doctoral Research Associate with the Research Foundation appointed to the Vision Lab, Department of Electrical and Computer Engineering, Old Dominion University. She received her Ph.D. in Electrical and Computer Engineering in May 2024 and her B.S. in Computer Engineering in May 2018, both from Old Dominion University, Norfolk, VA, USA. From September 2020 to May 2024, she was a National Science Foundation Graduate Research Fellow. Her research interests include computer vision, deep learning, human-computer and human-robot interaction, affective computing, medical image analysis, and responsible AI.