“staRt” Biofeedback App

Current DAT Group Members & Roles

  • Heather – Communicative Sciences and Disorders – NYU Steinhardt – http://heathercampbellslp.wordpress.com/
    • Role: coordination between teams; tutorial development; deriving formula for template generation
  • Dalit – Interactive Telecommunications – NYU Tisch – dalitshalom.com
    • Role: UX development; logo design
  • Helen – Integrated Digital Media – NYU Polytech – helencarey@nyu.edu
    • Role: UX implementation; coordination with programmer
  • Celine – Integrated Digital Media – NYU Polytech – cw1208@nyu.edu
    • Role: UX implementation; logo design


This semester’s project picks up where last semester’s DAT team left off on an innovative and exciting project. The current DAT team joins the project’s contributors to date:

  • Tara McAllister Byun, PhD, CCC-SLP – project leader – professor in the Communicative Sciences and Disorders department at NYU Steinhardt
  • Mario Svirsky, PhD – biomedical engineer at NYU Langone
  • Tae Hin – Spring 2014
  • Eddie Chen – NYU Polytech – former DAT team member
  • Hsuan Chang – NYU Occupational Therapy department – former DAT team member

Schematic Timeline

[Image: schematic timeline flowchart]

===================================================

Rationale for Product Development

While statistics on speech and language disorders in US children are limited, the American Speech-Language-Hearing Association (ASHA) estimates that speech sound disorders (SSDs) affect 10% of US children. The approximate prevalence of language difficulty in preschool-age children is between 2% and 19% (ASHA, 2011). By the first grade, the prevalence rate of noticeable speech disorders drops to roughly 5%; the majority of these speech disorders have no known cause (NIDCD, 2010). After the age of 8, spontaneous resolution of speech errors becomes increasingly unlikely.

Of the residual articulatory errors, the /r/ sound is the most commonly reported misarticulation (Ruscello, 1995). The American /r/ requires a complex articulatory configuration, with the major constrictions concealed inside the mouth. This makes /r/ particularly hard to treat: the clinician cannot provide visual cues, and successful articulation lacks strong tactile-kinesthetic cues. While the cause of persistent misarticulation in individuals with no gross deficits in oral structure and function is generally unknown, many believe that an inability to perceive the difference between correct and incorrect /r/ plays a role in the misarticulation.

Recent research (McAllister Byun & Hitchcock, 2012; McAllister Byun, Maas, & Swartz, 2013; Preston, Brick, & Landi, 2013) supports the hypothesis that treatment for SSDs can be enhanced through the incorporation of visual biofeedback technologies. By viewing real-time visualizations of their speech, individuals can learn to manipulate their output to match a visual target representing a correct /r/ sound. As with other biofeedback techniques, acoustic biofeedback for speech therapy aims to help an individual develop and retain specific motor skills. Recent work has found that the application of motor-learning principles from other disciplines may enhance learning in SSD interventions (e.g., Maas et al., 2008); however, further research is needed to establish the role of speech-motor learning and the potential impact of visual biofeedback in the context of speech therapy for persistent articulatory disorders.

Works Cited

(see Lit Review section)

===================================================

Project Origin

The idea for “staRt” stems from the research of Dr. Tara McAllister Byun, which investigates the efficacy of visual acoustic biofeedback therapy for children ages 9-14. Dr. McAllister Byun’s lab uses a live linear predictive coding (LPC) spectrum to help clients adjust their speech in real time to improve their /r/ sound. The lab’s current software/hardware package is expensive and requires specialized training. A current need in the field of speech-language pathology is an iOS app that not only provides an LPC spectrum but is also affordable and guides a speech-language pathologist (SLP) through biofeedback therapy sessions with clients. The final product will be a free, user-friendly app that SLPs can use with clients who struggle to produce the /r/ sound and have not responded to traditional speech therapy techniques.
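For readers unfamiliar with LPC: the live spectrum is driven by a small set of linear-prediction coefficients recomputed on each audio frame. As a rough illustration only (this is our own sketch, not the lab's or the app's actual DSP code, and the function names are ours), the coefficients can be estimated from a frame's autocorrelation via the Levinson-Durbin recursion:

```javascript
// Biased autocorrelation of an audio frame up to lag `maxLag`.
function autocorrelate(frame, maxLag) {
  const r = new Array(maxLag + 1).fill(0);
  for (let lag = 0; lag <= maxLag; lag++) {
    for (let n = 0; n + lag < frame.length; n++) {
      r[lag] += frame[n] * frame[n + lag];
    }
  }
  return r;
}

// Levinson-Durbin recursion: returns polynomial coefficients a[0..order]
// with a[0] = 1, defining the LPC inverse filter
// A(z) = 1 + a[1]z^-1 + ... + a[order]z^-order.
function lpcCoefficients(frame, order) {
  const r = autocorrelate(frame, order);
  let a = new Array(order + 1).fill(0);
  a[0] = 1;
  let err = r[0];
  if (err === 0) return a; // silent frame: no prediction possible
  for (let i = 1; i <= order; i++) {
    // Reflection coefficient for stage i.
    let acc = r[i];
    for (let j = 1; j < i; j++) acc += a[j] * r[i - j];
    const k = -acc / err;
    // Symmetric coefficient update.
    const next = a.slice();
    for (let j = 1; j < i; j++) next[j] = a[j] + k * a[i - j];
    next[i] = k;
    a = next;
    err *= 1 - k * k;
  }
  return a;
}
```

For vowel-like sounds such as /r/, the peaks of the resulting all-pole spectrum approximate the formants whose positions the client learns to adjust.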

Name Development: The app’s name is still a work in progress. It was temporarily called the “aRt” app; the current name, “staRt”, is an acronym for “Speech Therapist’s application for R Treatment”.

[Image: staRt logo]

Description/Materials: Most of the app’s current code is written in Objective-C. The GUI is being developed in HTML5. To increase cross-platform compatibility, we hope to transition as much code as possible to web languages.
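Because the GUI layer is HTML5, the spectrum display can plausibly be drawn in plain JavaScript on a canvas once LPC coefficients arrive from the native audio code. The sketch below is our own illustration (the function names and the native-to-web bridge are assumptions, not the app's source): it evaluates the spectral envelope 1/|A(e^jω)| from a set of LPC coefficients and traces the curve onto a canvas 2D context.

```javascript
// Given LPC polynomial coefficients a (with a[0] = 1), evaluate the
// spectral envelope magnitude 1/|A(e^{j*omega})| at `numBins` evenly
// spaced frequencies from 0 (DC) to the Nyquist frequency.
function lpcEnvelope(a, numBins) {
  const env = new Array(numBins);
  for (let b = 0; b < numBins; b++) {
    const omega = (Math.PI * b) / (numBins - 1);
    let re = 0, im = 0;
    for (let k = 0; k < a.length; k++) {
      re += a[k] * Math.cos(omega * k);
      im -= a[k] * Math.sin(omega * k);
    }
    env[b] = 1 / Math.sqrt(re * re + im * im);
  }
  return env;
}

// Trace the envelope onto an HTML5 canvas 2D context, scaling the
// magnitudes to the canvas dimensions.
function drawEnvelope(ctx, env, width, height) {
  const max = Math.max(...env);
  ctx.beginPath();
  env.forEach((v, i) => {
    const x = (i / (env.length - 1)) * width;
    const y = height - (v / max) * height;
    i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
  });
  ctx.stroke();
}
```

One advantage of this split is that only the envelope samples need to cross the native/web boundary on each frame, keeping the HTML5 side free of DSP code.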

===================================================

Literature Review

Speech-Language Pathology

McAllister Byun Lab home

McAllister Byun Lab iOS description

McAllister Byun Lab Biofeedback video

Wall Street Journal features McAllister Byun Lab, 10/20/14

Article in Wall Street Journal about speech therapy apps and biofeedback

American Speech-Language-Hearing Association. (2011). Speech-Language Pathology Medical Review Guidelines. Retrieved from http://www.asha.org/uploadedFiles/SLP-Medical-Review-Guidelines.pdf

Boyce, S. E., Hamilton, S. M., Scholl, L. M., & Schmidlin, S. (2013). Defining the Acoustic Targets for Correct Production of /r/. Poster presented at the American Speech-Language Hearing Association Conference.

McAllister Byun, T., Hitchcock, E. R., & Swartz, M. T. (2014). Retroflex versus bunched in treatment for rhotic misarticulation: Evidence from ultrasound biofeedback intervention. Journal of Speech, Language, and Hearing Research, 1-15.

Hagiwara, R. (1995). Acoustic realizations of American /r/ as produced by women and men (Vol. 90). Phonetics Laboratory, Dept. of Linguistics, UCLA.

Hitchcock, E. R., & McAllister Byun, T. (2014). Enhancing generalisation in biofeedback intervention using the challenge point framework: A case study. Clinical Linguistics and Phonetics, 0, 1-17.

Klein, H., Grigos, M., McAllister Byun, T., & Davidson, L. (2012). The relationship between listeners’ perceptions and acoustic correlates of children’s /r/ productions. Clinical Linguistics and Phonetics, 26, 628-645.

Klein, H., McAllister Byun, T., Davidson, L., & Grigos, M. (2013). A multidimensional investigation of children’s /r/ productions: Perceptual, ultrasound, and acoustic measures. American Journal of Speech-Language Pathology, 22, 540-553.

Lee, S., Potamianos, A., & Narayanan, S. (1999). Acoustics of children’s speech: Developmental changes of temporal and spectral parameters. J Acoust Soc Am, 105(3), 1455-1468.

Lewis, B. A., & Shriberg, L. D. (1994, November). Life span interrelationships among speech, prosody-voice, and nontraditional phonological measures. Miniseminar presented at the Annual Convention of the American Speech-Language-Hearing Association, New Orleans, LA.

Maas, E., & Farinella, K.A. (2012). Random versus blocked practice in treatment for childhood apraxia of speech. Journal of Speech, Language, and Hearing Research, 55 , 561-578.

Maas, E., Robin, D. A., Austermann Hula, S. N., Freedman, S. E., Wulf, G., Ballard, K. J., & Schmidt, R. A. (2008). Principles of motor learning in treatment of motor speech disorders. American Journal of Speech-Language Pathology, 17, 277-298.

McAllister Byun, T., & Hitchcock, E. R. (2012). Investigating the use of traditional and spectral biofeedback approaches to intervention for /r/ misarticulation. American Journal of Speech-Language Pathology, 21, 207-221.

McAllister Byun, T., Maas, E., & Swartz, M. T. (2013). Motor learning principles in /r/ treatment: Direction of attentional focus. Talk presented at ASHA 2013, Chicago, IL. Retrieved from https://www.academia.edu/5184225/Motor_learning_principles_in_r_treatment_Direction_of_attentional_focus_McAllister_Byun_Maas_and_Swartz_

NIDCD Health Information. (2010, June 07). Quick Stats for Voice, Speech, and Language. Retrieved from http://www.nidcd.nih.gov/health/statistics/vsl/Pages/stats.aspx.

Preston, J. L., Brick, N., & Landi, N. (2013). Ultrasound biofeedback training for children with persisting childhood apraxia of speech. American Journal of Speech-Language Pathology, 22(4), 627-643.

Ruscello, D. M. (1995). Visual feedback in treatment of residual phonological disorders. Journal of Communication Disorders, 28, 279-302.

Development

Apple, Inc. (2014, Oct 20, last update). iOS Human Interface Guidelines. Retrieved from https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/index.html#//apple_ref/doc/uid/TP40006556

Apple, Inc. (2014, Sept. 17, last update). App Programming Guide for iOS. Retrieved from https://developer.apple.com/library/ios/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/iPhoneAppProgrammingGuide.pdf

Apple, Inc. (2014, Mar 10, last update). Safari Web Content Guide. Retrieved from https://developer.apple.com/library/mac/documentation/AppleApplications/Reference/SafariWebContent/SafariWebContent.pdf

Pilgrim, M. (2010). HTML5: Up and running. Sebastopol, CA: O’Reilly.

Stark, J. (2010). Building iPhone apps with HTML, CSS, and JavaScript. Beijing: O’Reilly.

Design

Bedford, A. (2014, Sept. 21). Animation for Attention and Comprehension. Nielsen Norman Group: Evidence-Based User Experience Research. Retrieved from http://www.nngroup.com/articles/animation-usability/.

Bedford, A. (2014, Feb. 16). Instructional Overlays and Coach Marks for Mobile Apps. Nielsen Norman Group: Evidence-Based User Experience Research. Retrieved from http://www.nngroup.com/articles/mobile-instructional-overlay/.

Krug, S. (2014). Don’t make me think, revisited: A common sense approach to Web usability (3rd ed.). Berkeley, CA: New Riders.

Norman, D. A. (2013). The design of everyday things (Revised and expanded edition). Cambridge, MA: MIT Press.

===================================================

Contextual Research

Over multiple visits, our group discussed the project with members of the McAllister Byun Lab. We also visited the Lab to observe existing lab-based biofeedback technologies and reenactments by the lab’s SLP of typical introductory biofeedback therapy sessions.

To better understand the needs of potential end users, we conducted two semi-structured, exploratory interviews regarding current issues in the field of speech-language pathology.

The first interview was with a rural Iowa-based SLP with more than 25 years of experience providing SLP services to a four-county area; the second was with a suburban Iowa-based ESL teacher, currently employed as a speech-language pathology assistant (SLPA), who hopes to return to school for full SLP training.

Based on our experience with the SLPs at the McAllister Byun Lab and on our interviews, we developed the following user personas.

[Image: persona 1]

[Image: persona 2]

[Image: persona 3]

===================================================

Competitive Analysis

[Image: staRt competitive analysis chart]

===================================================

Definition of Development Goals

Target Population

Practicing SLPs (in school, clinic, or home settings) willing to pilot the app for an 8-week course of biofeedback therapy with native American-English-speaking clients aged 7-14 years with treatment-resistant /r/ misarticulations. (Patient exclusion criteria: language comprehension below age norms, gross deficits in oral structure and function, or a history of behavioral, neurological, or hearing impairment.)

Application Components/Experiences

  • LPC spectrum visualization
    • should be the focus of the app
    • should include options for audio and video capture, playback, and comparison
    • must be able to overlay waveform templates onto the dynamic LPC display
  • Interactive tutorial
    • introduce SLPs and clients to biofeedback therapy
    • elicit baseline pre-therapy vocal samples
    • guide SLP and client through the process necessary to select individualized LPC display parameters and match the user to a personalized treatment target
    • remind users of essential aspects of the research protocol and how to avoid the introduction of error or bias into the dataset
  • Structured and Unstructured Practice Modules
    • syllable-based, word-based, and protocol-based structured practice modules in which the app presents a model of the target sound, prompts the user to repeat the target while viewing the biofeedback display, and pauses for clinician feedback
    • a free practice module, in which participants can interact with their LPC target without structured prompts or accuracy tracking
  • Participant profiles
    • store user demographics and clinical trial registration information
    • track and display usage, progress, and accuracy statistics
    • store individualized training plan, schedule, and reminders
    • allow for manual adjustment of individual therapy template
  • Settings
    • allow for manual adjustments of the LPC display
    • allow for manual adjustments to the current target
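The overlay requirement in the first component above implies comparing the live LPC envelope against the stored target template. As one hypothetical way such a match could be quantified (the app's actual scoring method, if it has one, is not described here), the sketch below computes the root-mean-square distance between the two curves on a decibel scale:

```javascript
// Hypothetical match score between a live LPC envelope and an overlaid
// target template: the RMS difference between the two curves in dB.
// Both arrays are assumed to hold linear magnitudes sampled at the same
// frequencies. 0 dB means a perfect match; larger values mean a worse fit.
function envelopeMismatchDb(live, template) {
  if (live.length !== template.length) {
    throw new Error("envelopes must be sampled at the same frequencies");
  }
  let sumSq = 0;
  for (let i = 0; i < live.length; i++) {
    const diffDb = 20 * Math.log10(live[i] / template[i]);
    sumSq += diffDb * diffDb;
  }
  return Math.sqrt(sumSq / live.length);
}
```

A per-frame score like this could feed the accuracy tracking mentioned under Participant profiles, though thresholds for "correct" /r/ would need to come from clinical data rather than from this sketch.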

===================================================

Proposed Sitemap & User Flow

[Image: sitemap and user-flow diagram]

===================================================

Sketches & Iterations

[Image: Final Visualization]

———–

[Images: Preliminary Tutorial outlines]

[Image: Dalit’s UX Mock-up, version 3]

[Image: Dalit’s UX Mock-up, version 4]

[Images: Dalit’s UX Mock-up, version 5]

[Image: Dalit’s Hi-fi Mock-up]

===================================================

Results of User Trials

While qualitative user testing was performed throughout the development process, a primary goal for the team is to produce a workable version ready for piloting in Dr. McAllister Byun’s lab and remotely with SLPs. Current participants in the lab will use the app in a pilot study to determine the efficacy of the app as compared with the desktop hardware setup. A tentative timeline for more formal user testing is as follows:

  • Piloting the first functioning version of the iOS app starts in January 2015
    • Invite previous participants back to the lab
    • Guide them through interaction with the app
    • Interviews probing opinions on the usability of the app
    • Revise app based on pilot feedback
    • Invite participants back to re-test the app and provide qualitative feedback
    • Repeat until the prototype is deemed usable by previous participants
  • Begin a focus group of 8 SLPs to test the app and provide comments on usability and the feasibility of adopting such a device in their own practices
  • iOS study
    • More than 60 SLP/client dyads are currently interested!

===================================================
