Technical Brief

Design of a Low-Cost Social Robot for Children With Complex Communication Needs

Christabel Jananii Vaz, Eric Wade

Department of Mechanical, Aerospace,
and Biomedical Engineering,
University of Tennessee,
Knoxville, TN 37996

J. Med. Devices 10(3), 030943 (Aug. 1, 2016) (2 pages); Paper No. MED-16-1091; DOI: 10.1115/1.4033765

History: Manuscript received March 1, 2016; final manuscript received March 17, 2016; published online August 1, 2016. Editor: William Durfee.

Each year in the U.S., 8000–12,000 children are born with detectable levels of hearing loss in one or both ears (see Ref. [1], Centers for Disease Control). Research supported by the National Institutes of Health suggests that the most intensive period of speech and language development is during the first 3 years of life [2]. These children will also often fall behind their peers in language, cognitive, and social skills [2]. However, in recent years, the U.S. Food and Drug Administration has approved cochlear implants for children beginning at the age of 12 months. A cochlear implant is an auditory stimulation device that bypasses damaged hair cells in the cochlea, which prevent the reception of sound, to send electrical current directly to the auditory nerve [3–5]. The postsurgery rehabilitation process often includes augmentative and alternative communication (AAC) techniques. For people with severe speech or language problems, AAC is a commonly used supplement for existing speech, or in some cases even replaces nonfunctional speech [6]. Individuals with complex communication needs, such as hearing or speaking impairments, often rely on AAC, such as a computer-programmed communication board with graphic symbols and voice output for communication [7].

For children with cochlear implants, telerehabilitation may be a powerful tool for the recovery of hearing capacity. This typically consists of a child at home with a parent and a clinician using real-time video-streaming software to perform therapeutic interventions that consist of modeling and repeating specific words. Unfortunately, children may lose interest. Further, ensuring that these interventions are performed consistently is a challenge. Thus, there is a need for low-cost, easy-to-use systems capable of consistently and accurately providing this motivational interaction. A recent technological innovation that shows promise for such interactions is socially assistive robotics (SAR). SAR describes robots capable of providing hands-off, social assistance for medical and therapeutic outcomes [8–10]. SAR systems utilize social interaction to create close and effective interactions with humans for the purpose of giving assistance and achieving measurable progress in convalescence, rehabilitation, learning, and other domains. In the following, we discuss the development of an SAR system as a form of AAC for rehabilitation of children who have received cochlear implants, with particular attention to the hardware and software design and initial pilot testing.

The overall goal is to develop a system capable of autonomous supervision of a home-based AAC intervention. In such an interaction, the robot faces the child and prompts the child with specific words (Fig. 1). The child then repeats those words. Once the robot has acknowledged the child's response, it moves on to a new verbalization.

The design requirements imposed by the interaction on the robot include the ability to: (1) prompt the child using images and videos, (2) provide intervention-specific stimuli, (3) ensure that the child's voice can be heard, and (4) contingently guide the child through the interaction. Based on these requirements, we discuss the software and hardware design in Secs. 2.1 and 2.2.

Hardware Design.

The robotic system consists of an off-the-shelf social robot used as the “base” of the system, and a smartphone used to display the robot's face and to provide the user with feedback. The robotic platform, Rapiro, is a commercially available programmable system with 12 degrees of freedom. It contains a Raspberry Pi board for communication and an Arduino board for servo motor control. For our design, it was necessary to control the verbal and nonverbal gestures. Thus, we removed the head of the commercial version and created a custom 3D-printed shell capable of holding an Android-based smartphone (Fig. 1).

During the interaction, the robot's motion is limited. The primary motion of interest is the movement of the left arm. To ensure that the child verbalizes in close proximity to the Android device, the robot lifts a microphone after prompting the child to speak (the microphone on the Android device captures the child's verbalization).

Software Design.

There are two software applications required for the therapeutic interaction. The first runs on the Android smartphone. We use the MIT App Inventor software to govern the robot's behavior. This software allows the user to build input/output applications with access to the phone hardware. The face of the robot is controlled using images created by the authors. Of particular importance is the mouth, which must realistically move in response to robot verbalizations. We use eight separate images for closed mouth, consonants, and vowel sounds (Fig. 2, see Ref. [11]).
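The mouth animation amounts to selecting one of the eight images for each sound class as the robot speaks. The following Python sketch illustrates one way such a lookup could be organized; the image filenames and sound-class labels are illustrative assumptions, not the authors' actual asset names.

```python
# Illustrative viseme lookup: one image per sound class (hypothetical
# filenames; the paper specifies only that eight images cover closed
# mouth, consonant, and vowel shapes).
MOUTH_IMAGES = {
    "rest": "mouth_closed.png",   # closed mouth between words
    "MBP": "mouth_mbp.png",       # lips-together consonants
    "FV": "mouth_fv.png",         # lip-to-teeth consonants
    "LTD": "mouth_ltd.png",       # tongue-visible consonants
    "A": "mouth_a.png",           # open vowel
    "E": "mouth_e.png",           # wide vowel
    "O": "mouth_o.png",           # round vowel
    "U": "mouth_u.png",           # pursed vowel
}

def frames_for(viseme_sequence):
    """Return the image sequence to display while the robot verbalizes,
    falling back to the closed mouth for any unrecognized label."""
    return [MOUTH_IMAGES.get(v, MOUTH_IMAGES["rest"]) for v in viseme_sequence]
```

Grouping consonants into a few shared mouth shapes rather than one shape per phoneme is a standard character-animation simplification [11], which is how eight images can cover all verbalizations.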

MIT App Inventor is also used as the verbal and nonverbal gesture controller. These gestures are performed in a specific sequence governed by the interaction (Fig. 3). Initially, the robotic system provides introductory text to the child with a prompt. It then performs a prespecified number of trials. In each trial, the robot verbalizes the word while providing an accompanying image, uses voice recognition to wait for the child's response, and provides an acknowledgement and positive feedback (using facial gestures and verbalization). No negative feedback is required—if the robot is unable to “hear” the child, it repeats the prompt. The full process is depicted in Fig. 3.
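The trial sequence above can be sketched as a simple loop. The Python below is a stand-in for the App Inventor logic, with a stub `Robot` class that records actions instead of driving hardware; all method names are hypothetical.

```python
# Sketch of the trial sequence described above. The Robot class is a
# recording stub (the real behavior is implemented in MIT App Inventor).
class Robot:
    def __init__(self, responses):
        self.responses = iter(responses)  # scripted child "verbalizations"
        self.log = []                     # record of actions for inspection

    def say(self, text): self.log.append(("say", text))
    def show_image(self, word): self.log.append(("show", word))
    def smile(self): self.log.append(("smile", None))

    def listen(self):
        # None models the case where the robot could not "hear" the child
        return next(self.responses, None)

def run_trials(robot, words, max_repeats=3):
    robot.say("Let's practice some words!")      # introductory prompt
    for word in words:
        robot.show_image(word)                   # accompanying image
        robot.say(word)                          # verbalize the prompt word
        for _ in range(max_repeats):
            if robot.listen() is not None:       # child's response detected
                robot.smile()                    # positive feedback only
                robot.say("Great job!")
                break
            robot.say(word)                      # no negative feedback: re-prompt
```

Note that the loop mirrors the design requirement of no negative feedback: a missed response simply triggers a repeat of the same prompt.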

The entire interaction is controlled using custom Python code that commands both the phone gestures and the robot movements. Thus, Python scripts running on the Pi board send coordinated movement commands to the Arduino board and facial-gesture commands to the Android device.
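A minimal sketch of this coordination layer is shown below. The command strings, serial protocol, and message format are assumptions for illustration; the paper does not specify the actual protocol between the Pi, the Arduino, and the Android device.

```python
# Sketch of the Pi-side coordination layer: servo commands go to the
# Arduino over a serial port, facial-gesture messages go to the Android
# device over a socket. All command names are hypothetical.
import json

def send_servo_command(serial_port, command):
    """Write a newline-terminated servo command to the Arduino."""
    serial_port.write((command + "\n").encode("ascii"))

def send_face_command(sock, gesture):
    """Send a JSON-encoded facial-gesture message to the Android app."""
    sock.sendall(json.dumps({"gesture": gesture}).encode("utf-8") + b"\n")

def lift_microphone(serial_port, sock):
    """Coordinate the left-arm microphone lift with a listening face,
    as in the interaction described above."""
    send_face_command(sock, "listening")
    send_servo_command(serial_port, "LEFT_ARM_UP")
```

In practice the serial port would be opened with a library such as pySerial; any file-like object with a `write` method works for the Arduino side, which keeps the logic easy to test without hardware.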

The system performance and robustness were evaluated with a number of pilot participants during an engineering outreach day at the University of Tennessee. High school students from the greater Knoxville, TN, area interacted with the robot during a single session. During this interaction, five students were prompted by the robot to say their names, and the robot then responded by repeating the students' names back to them. The robot was able to repeat three of the five names back with no difficulty. Though this is different from the full interaction described in Fig. 3, the individual steps were validated in this pilot interaction. Specifically, the robot provided prompts (Fig. 3, step 1), facial stimuli (steps 2, 4, and 5), and user-contingent visual and audio stimuli (steps 3 and 4). Further, the ability of the system to perform successfully in a noisy environment (over 15 participants grouped around the robot) demonstrated robustness to environmental sound. Although the current model does not measure accuracy of verbalization, future work will incorporate voice recognition technology.

As for the cost of the robot, Rapiro and the Raspberry Pi board have a commercial cost of approximately $550, and the smartphone costs approximately $200. The 3D printing cost is minimal; thus, the total cost of the system is roughly $750. This is a relatively low cost compared to similar social robots.

In our pilot implementation, we validated that the robot was able to perform the specific subtasks required for a therapeutic interaction. In further testing, we will evaluate the robot with nonimpaired children in the target age group and children with cochlear implants.

The authors would like to thank Nara Paz Pereira for her assistance on the facial features and animations of the robot. We would also like to thank Brett Brownlee and Sadra Hemmati for their assistance on the robot design and programming.

Copyright © 2016 by ASME

References

Martin, J. A., Hamilton, B. E., Osterman, M. J. K., Curtin, S. C., and Mathews, T. J., 2015, “Births: Final Data for 2013,” Natl. Vital Stat. Rep., 64(1), pp. 1–65.
National Institutes of Health, 2010, “Newborn Hearing Screening,” U.S. Department of Health & Human Services, Washington, DC.
ASHA, 2016, “Cochlear Implant Frequently Asked Questions,” American Speech-Language-Hearing Association (ASHA), Rockville, MD.
Conner, C. M., 2006, “Examining the Communication Skills of a Young Cochlear Implant Pioneer,” J. Deaf Stud. Deaf Educ., 11(4), pp. 449–460.
Loizou, P. C., 1999, “Introduction to Cochlear Implants,” IEEE Eng. Med. Biol. Mag., 18(1), pp. 32–42.
ASHA, 2016, “Augmentative and Alternative Communication (AAC),” American Speech-Language-Hearing Association (ASHA), Rockville, MD.
Thistle, J. J., and Wilkinson, K. M., 2015, “Building Evidence-Based Practice in AAC Display Design for Young Children: Current Practices and Future Directions,” Augmentative Altern. Commun., 31(2), pp. 124–136.
Tapus, A., and Mataric, M. J., 2006, “Towards Socially Assistive Robotics,” J. Rob. Soc. Jpn., 24(5), pp. 576–578.
Feil-Seifer, D., and Mataric, M. J., 2011, “Socially Assistive Robotics,” IEEE Rob. Autom. Mag., 18(1), pp. 24–31.
Feil-Seifer, D., and Matarić, M. J., 2005, “Defining Socially Assistive Robotics,” IEEE 9th International Conference on Rehabilitation Robotics (ICORR 2005), Chicago, IL, June 28–July 1, pp. 465–468.
Maestri, G., 2002, Digital Character Animation 2, Vol. II, New Riders Publishing, Indianapolis, IN.

Figures

Fig. 1: The social robot provides a microphone to encourage the participant to verbalize close to the robot and visual stimuli to elicit the child's response

Fig. 2: Eight images used to simulate mouth gestures with robot verbalization

Fig. 3: Interactions consist of the robot prompting the child, awaiting the child's response, and subsequently providing feedback
