Deep Neural Network-based Speaker-Aware Information Logging for Augmentative and Alternative Communication

Authors

  • Gang Hu, State University of New York Buffalo State, USA
  • Szu-Han Kay Chen, State University of New York, USA
  • Neal Mazur, State University of New York, USA

DOI:

https://doi.org/10.37965/jait.2021.0017

Keywords:

augmentative and alternative communication (AAC), outcome measures, visual logs, hand tracking, deep learning

Abstract

People with complex communication needs can use a high-technology augmentative and alternative communication (AAC) device to communicate with others. Currently, researchers and clinicians often use data logging from high-tech AAC devices to analyze AAC user performance. However, existing automated data logging systems cannot differentiate the authorship of the data log when more than one user accesses the device. This issue reduces the validity of the data logs and complicates performance analysis. Therefore, this paper presents a deep neural network-based visual analysis approach that processes videos of practice sessions to detect which AAC user is operating the device. This approach has significant potential to improve the validity of data logs and ultimately to enhance AAC outcome measures.
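To make the speaker-attribution idea concrete, the following is a minimal sketch of the downstream logging step only, not the paper's actual method. It assumes a hand-tracking/identification model (such as the deep neural network described in the article) has already produced timestamped user detections from video; the function names, data shapes, and the nearest-detection matching rule here are all hypothetical illustrations.

```python
from bisect import bisect_left

def attribute_log_entries(detections, log_entries, max_gap=2.0):
    """Attribute each AAC data-log entry to a user.

    detections: list of (timestamp, user_id) pairs, sorted by time,
        assumed to come from a video-based hand-tracking model
        (hypothetical output format).
    log_entries: list of (timestamp, text) pairs from the device log.
    max_gap: maximum time difference (seconds) allowed between a log
        entry and a detection for attribution.
    Returns a list of (timestamp, text, user_id or None).
    """
    times = [t for t, _ in detections]
    attributed = []
    for t, text in log_entries:
        i = bisect_left(times, t)
        # Consider the detections immediately before and after the log entry.
        candidates = []
        if i > 0:
            candidates.append(detections[i - 1])
        if i < len(detections):
            candidates.append(detections[i])
        # Pick the detection nearest in time, if any.
        best = min(candidates, key=lambda d: abs(d[0] - t), default=None)
        if best is not None and abs(best[0] - t) <= max_gap:
            attributed.append((t, text, best[1]))
        else:
            # No detection close enough: leave authorship unresolved.
            attributed.append((t, text, None))
    return attributed
```

For example, a selection logged 0.2 s after the clinician's hand was detected at the device would be attributed to the clinician, while a selection with no nearby detection would be left unattributed rather than guessed.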

Published

2021-04-19

How to Cite

Hu, G., Chen, S.-H. K., & Mazur, N. (2021). Deep Neural Network-based Speaker-Aware Information Logging for Augmentative and Alternative Communication. Journal of Artificial Intelligence and Technology, 1(2), 138–143. https://doi.org/10.37965/jait.2021.0017

Issue

Section

Research Articles