Sample Lab Publications
Namasivayam, A. K., Pukonen, M., Goshulak, D., Hard, J., Rudzicz, F., Rietveld, T., Maassen, B., Kroll, R. & van Lieshout, P. (in press). Treatment intensity and childhood apraxia of speech. International Journal of Language & Communication Disorders.
BACKGROUND: Intensive treatment has been repeatedly recommended for remediating speech deficits in childhood apraxia of speech (CAS). However, differences in treatment outcomes as a function of treatment intensity have not been systematically studied in this population.
AIM: To investigate the effects of treatment intensity on outcome measures related to articulation, functional communication and speech intelligibility for children with CAS undergoing individual motor speech intervention.
METHODS & PROCEDURES: A total of 37 children (32-54 months of age) with CAS received 1×/week (lower intensity) or 2×/week (higher intensity) individual motor speech treatment for 10 weeks. Assessments were carried out before and after a 10-week treatment block to study the effects of variations in treatment intensity on the outcome measures.
OUTCOMES & RESULTS: The results indicated that only higher intensity treatment (2×/week) led to significantly better outcomes for articulation and functional communication compared with 1×/week (lower intensity) intervention. Further, neither lower nor higher intensity treatment yielded a significant change in speech intelligibility at the word or sentence level. In general, effect sizes for the higher intensity treatment group were larger for most variables compared with the lower intensity treatment group.
CONCLUSIONS & IMPLICATIONS: Overall, the results of the current study may allow for modification of service delivery and facilitate the development of an evidence-based care pathway for children with CAS.
Namasivayam, A. K., Wong, W. Y. S., Sharma, D. & van Lieshout, P. (2015). Visual speech gestures modulate efferent auditory system. Journal of Integrative Neuroscience, 14(1), 73-83.
Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of the vowels /a/ and /u/, and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results, based on the 7 individuals whose data met strict recording criteria, indicated a significant difference in TEOAE suppression between observing speech gestures relative to non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
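For readers unfamiliar with the suppression measure used above, contralateral suppression of an otoacoustic emission is conventionally quantified as the decibel difference between emission amplitudes recorded without and with contralateral noise. A minimal statement of that definition, in our own notation rather than the paper's:

\[ S(f) = A_{\text{quiet}}(f) - A_{\text{noise}}(f), \]

where \(A_{\text{quiet}}(f)\) and \(A_{\text{noise}}(f)\) are the emission amplitudes (in dB SPL) at frequency \(f\) without and with contralateral acoustic stimulation; larger \(S(f)\) indicates stronger MOC-mediated suppression.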
van Lieshout, P., Ben-David, B., Lipski, M. & Namasivayam, A. K. (2014). The impact of threat and cognitive stress in people who stutter. Journal of Fluency Disorders, 40, 93-109.
Purpose: In the present study, an Emotional Stroop task and a Classical Stroop task were used to separate the effects of threat content and cognitive stress from the phonetic features of words on motor preparation and execution processes.
Method: A group of 10 people who stutter (PWS) and 10 matched people who do not stutter (PNS) repeated colour names for threat content words and neutral words, as well as for traditional Stroop stimuli. Data collection included speech acoustics and movement data from the upper lip and lower lip using 3D EMA.
Results: PWS in both tasks were slower to respond and showed smaller upper lip movement ranges than PNS. For the Emotional Stroop task only, PWS were found to show larger inter-lip phase differences compared to PNS. General threat words were executed with faster lower lip movements (larger range and shorter duration) in both groups, but only PWS showed a change in upper lip movements. For stutter-specific threat words, both groups showed a more variable lip coordination pattern, but only PWS showed a delay in reaction time compared to neutral words. Individual stuttered words showed no effects. Both groups showed a classical Stroop interference effect in reaction time but no changes in motor variables.
Conclusion: This study shows differential motor responses in PWS compared to controls for specific threat words. Cognitive stress was not found to affect individuals who stutter differently from controls, nor was there evidence that its impact spreads to motor execution processes.
van Lieshout, P. & Neufeld, C. (2014). Coupling dynamics interlip coordination in lower lip load compensation. Journal of Speech, Language, and Hearing Research, 57(2), S597-S615.
PURPOSE: To study the effects of lower lip loading on lower and upper lip movements and their coordination to test predictions on coupling dynamics derived from studies in limb control.
METHOD: Movement data were acquired using Electro-Magnetic Midsagittal Articulography (EMMA) under four conditions: 1) without restrictions, serving as a baseline; 2) with a small carrier device attached to the lower lip; 3) with a 50 g weight added to the device; and, at the end of the session, 4) with the weight and device removed. For all conditions, eight participants repeated non-words at two speaking rates. Movement data were used to derive discrete kinematic measures, a cyclic index of spatio-temporal variability, phase deviations and standard deviations of relative phase for inter-lip coupling.
RESULTS: Kinematic variables were not systematically affected by lower lip load. Phase deviations also showed no change, but, in contrast, phase variability showed a significant increase for the lower lip load condition at fast rates.
CONCLUSIONS: Lower lip load effects are comparable to the reported impact of homologous limb loading, showing evidence for a tight coupling between both lips in line with predictions from coordination dynamics accounts in the literature.
Neufeld, C. & van Lieshout, P. (2014). Tongue kinematics in palate relative coordinate spaces for electro-magnetic articulography. Journal of the Acoustical Society of America, 135(1), 352-361.
This paper describes a method for constructing a three-dimensional model of the hard palate using electro-magnetic articulography, and defines two algorithms to derive constriction degree and constriction location values from the trajectories of tongue coils using this model. The kinematics of tongue motion that have been transformed into constriction degree and constriction location values are investigated in detail to determine whether this type of representation obeys the constraints theorized to operate over higher-level motor control. Results show that palate-relative coordinate spaces decouple mechanical dependencies present in the tongue, while maintaining low-level kinematic properties. They additionally preserve the 1/3 power law for speed and curvature observed across many motor systems. Finally, it is shown that tongue movements in a palate-relative coordinate space more closely correspond to their optimal, jerk-minimized trajectories. These results suggest that this type of coordinate space provides a closer match to higher-level motor planning, in line with production models that specify control units in terms of vocal tract constriction parameters.
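The power-law and jerk-minimization criteria referenced above are standard formulations in the motor control literature; stated briefly in our own notation (not taken from the paper), the 1/3 power law predicts that tangential speed \(v\) falls with path curvature \(\kappa\),

\[ v(t) = k\,\kappa(t)^{-1/3}, \]

and a jerk-minimized trajectory \(\mathbf{x}(t)\) over a movement of duration \(T\) minimizes the integrated squared third derivative of position,

\[ J = \int_0^T \left\lVert \frac{d^{3}\mathbf{x}}{dt^{3}} \right\rVert^{2} dt. \]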
Slis, A. & van Lieshout, P. (2013). The effect of phonetic context on speech movements in repetitive speech. Journal of the Acoustical Society of America, 134(6), 4496-4507.
This study examined how, in repetitive speech, articulatory movements differ in degree of variability and movement range depending on articulatory constraints manipulated by phonetic context and type of CVC-CVC word pair. These pairs consisted of words that either differed in onset consonants but shared rhymes, or were identical. Articulatory constraints were manipulated by employing different combinations of vowels and consonants. The word pairs were produced in a repetitive speech task at normal and fast speaking rates. Articulatory movements were measured with 3D electro-magnetic articulography. As measures of variability, median movement ranges and the coefficient of variation of target and non-target articulators were determined. To assess possible biomechanical constraints, correlation values between target and simultaneous non-target articulators were calculated as well. The results revealed that word pairs with different onsets had larger movement ranges than word pairs with identical onsets. In identical word pairs, the coefficient of variation showed higher values in the second than in the first word. This difference was not present in the alternating onset word pairs. For both types of word pairs, higher speaking rates showed higher correlations between target and non-target articulators than lower speaking rates, suggesting stronger biomechanical constraints for the former condition.
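As a point of reference, the coefficient of variation used above as a variability measure is simply the standard deviation normalized by the mean (a general definition, not specific to this paper):

\[ \mathrm{CV} = \frac{\sigma}{\mu}. \]

Normalizing by the mean makes variability comparable across articulators and conditions that differ in overall movement range.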
Namasivayam, A. K., Le, D. J., Hard, J., Lewis, S. E., Neufeld, C. & van Lieshout, P. (2013). Peripheral auditory tuning for vowels. Journal of Integrative Neuroscience, 12(4), 461-474.
In this study, 35 young, healthy adults were tested on whether speech-like stimuli evoke a unique response in the auditory efferent system. To this end, descending cortical influences on medial olivocochlear (MOC) activity were indirectly evaluated by studying the effects of contralateral suppression on distortion product otoacoustic emissions (DPOAEs) under four conditions: (a) in the absence of any contralateral noise (Baseline), (b) presence of contralateral broadband noise (Noise Baseline), (c) a vowel discrimination-in-noise (VDN) task and (d) a tone discrimination-in-noise (TDN) task. A statistically significant release from suppression was evident across all tested DPOAE frequencies (1, 1.5 and 2 kHz) only for the VDN task (p < 0.05), which yielded greater release from suppression than the TDN task. These findings indicate that during active listening in the presence of noise, MOC activity may be differentially modulated depending on the type of stimulus (vowel vs. tone). Specifically, in the presence of background noise, vowels may show a greater release from suppression in the cochlea than frequency-, intensity- and duration-matched tones.
Namasivayam, A. K., Pukonen, M., Hard, J., Jahnke, R., Kearney, E., Kroll, R. & van Lieshout, P. (2013). Motor speech treatment protocol for developmental motor speech disorders. Developmental Neurorehabilitation.
Objective: This study examines the effect of the Motor Speech Treatment Protocol (MSTP), a multi-sensory hybrid treatment approach, on five children (mean age: 3;3 years; SD: 0;1) with severe to profound speech sound disorders and motor speech difficulties.
Methods: A multiple probe design, replicated over five participants, was used to evaluate the effects of treatment on improving listeners’ auditory and visual judgements of speech accuracy.
Results: All participants demonstrated significant change between baseline and maintenance conditions, with the exception of KM, who may have had underlying psychosocial, regulation and/or attention difficulties. Both training words (practiced in treatment) and test words (not practiced in treatment) demonstrated positive change in all participants, indicating generalization of target features to untrained words.
Conclusion: These results provide preliminary evidence that the MSTP, which integrates multi-sensory information and utilizes hierarchical goal selection, may positively impact speech sound production by improving speech motor control in this population.
Neufeld, C., Purcell, D. & van Lieshout, P. (2013). Articulatory compensation to second formant perturbations. Journal of the Acoustical Society of America, 133(5), 3342.
There is a fast-growing literature examining speakers’ responses to real-time alterations of auditory feedback. The majority of these studies examine the response of the subject in acoustic terms. Since many subjects fail to (acoustically) compensate for the perturbation, the current experiment examined whether there are systematic articulatory responses to formant perturbation in the absence of compensation at the level of acoustics. Articulatory data were collected using a 3D electromagnetic articulograph. F2 was gradually shifted up or down, and preliminary results from three English-speaking subjects showed that two subjects displayed no response in either their acoustics or their articulation. The remaining speaker, who also did not show compensation at the level of acoustics, nonetheless displayed a systematic response in some articulatory variables. The acoustic effects of his response were masked because the other articulators behaved in a more variable way, making the second formant vary randomly from trial to trial. Based on these results, we expect to see a spectrum of response patterns in a larger population of speakers, ranging from total non-compensation in both acoustics and articulation, to partial compensation in articulation, to global articulatory compensation that induces the appropriate compensation at the level of acoustic output.
Ben-David, B. M., Thayapararajah, A. & van Lieshout, P. (2013). A resource of validated digital audio recordings to assess identification of emotion in spoken language after a brain injury. Brain Injury, 27(2), 248-250.
Rudzicz, F., Hirst, G. & van Lieshout, P. (2012). Vocal tract representation in the recognition of cerebral palsied speech. Journal of Speech, Language, and Hearing Research, 55(4), 1190-1207.
In this study, the authors explored articulatory information as a means of improving the recognition of dysarthric speech by machine. Data were derived chiefly from the TORGO database of dysarthric articulation (Rudzicz, Namasivayam, & Wolff, 2011), in which motions of various points in the vocal tract are measured during speech. In the 1st experiment, the authors provided a baseline model indicating a relatively low performance with traditional automatic speech recognition (ASR) using only acoustic data from dysarthric individuals. In the 2nd experiment, the authors used various measures of entropy (statistical disorder) to determine whether characteristics of dysarthric articulation can reduce uncertainty in features of dysarthric acoustics. These findings led to the 3rd experiment, in which recorded dysarthric articulation was directly encoded into the speech recognition process. The authors found that 18.3% of the statistical disorder in the acoustics of speakers with dysarthria can be removed if articulatory parameters are known. Using articulatory models reduces phoneme recognition errors by up to 6% (relative) for speakers with dysarthria in speaker-dependent systems. Articulatory knowledge is useful in reducing rates of error in ASR for speakers with dysarthria and in reducing statistical uncertainty of their acoustic signals. These findings may help to guide clinical decisions related to the use of ASR in the future.
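The 18.3% figure cited above maps onto a standard information-theoretic identity: the fraction of acoustic entropy removed by conditioning on articulation equals the mutual information between the two, normalized by the marginal entropy. In our own notation (the paper's exact estimators may differ):

\[ \frac{H(A) - H(A \mid X)}{H(A)} = \frac{I(A; X)}{H(A)} \approx 0.183, \]

where \(A\) denotes the acoustic features and \(X\) the articulatory parameters.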
Steele, C. M., van Lieshout, P. & Pelletier, C. A. (2012). The influence of stimulus taste and chemesthesis on tongue movement timing in swallowing. Journal of Speech, Language, and Hearing Research, 55(1), 262-275.
This study explored the influence of taste and trigeminal irritation (chemesthesis) on durational aspects of tongue movement in liquid swallowing, controlling for the influence of perceived taste intensity. Electromagnetic midsagittal articulography was used to trace tongue movements during discrete swallows of 5 liquids: water, 3 moderate concentration tastants without odor (sweet, sour, sweet-sour), and a high concentration of citric acid (sour taste plus chemesthesis). Participants were 33 healthy adults in 2 gender-balanced, age-stratified groups (under/over 50 years of age). Perceived taste intensity was measured using the Generalized Labeled Magnitude Scale (Bartoshuk, 2000; Bartoshuk et al., 2004). Tongue movement sequencing and the durations of the composite tongue movement envelope and its component events (rise phase, location of first movement peak, release phase) were calculated. No obligate sequence of tongue segment movement was observed. Overall durations and the timing of the first movement peak were significantly longer with water than with the moderate concentration of sweet-sour liquid. Perceived taste intensity did not modulate stimulus effects in a significant way. The expected pattern of shorter movement durations with the high concentration of citric acid was not seen; this chemesthetic-taste stimulus did not influence the durations of tongue movements compared with those seen during the swallowing of moderate concentration tastants and water.
Bose, A. & van Lieshout, P. (2012). Speech-like and non-speech lip kinematics and coordination in aphasia. International Journal of Language & Communication Disorders, 47(6), 654-672.
Background: In addition to the well-known linguistic processing impairments in aphasia, oro-motor skills and the articulatory implementation of speech segments are reported to be compromised to some degree in most types of aphasia.
Aims: This study aimed to identify differences in the characteristics and coordination of lip movements in the production of a bilabial closure gesture between speech-like and non-speech tasks in individuals with aphasia and healthy control subjects.
Methods & Procedures: Upper and lower lip movement data were collected for a speech-like and a non-speech task using an AG 100 EMMA system from five individuals with aphasia and five age- and gender-matched control subjects. Each task was produced at two rate conditions (normal and fast), and in a familiar and a less familiar manner. Single articulator kinematic parameters (peak velocity, amplitude, duration and cyclic spatio-temporal index) and multi-articulator coordination indices (average relative phase and variability of relative phase) were measured to characterize lip movements.
Outcomes & Results: The results showed that when the two lips had similar task goals (bilabial closure) in the speech-like and non-speech tasks, kinematic and coordination characteristics were not found to be different. However, when changes in rate were imposed on the bilabial gesture, only the speech-like task showed functional adaptations, indicated by a greater decrease in amplitude and duration at fast rates. In terms of group differences, individuals with aphasia showed smaller amplitudes and longer movement durations for the upper lip, higher spatio-temporal variability for both lips, and higher variability in lip coordination than the control speakers. Rate was an important factor in distinguishing the two groups, and individuals with aphasia were limited in implementing the rate changes.
Conclusions & Implications: The findings support the notion of subtle but robust differences in motor control characteristics between individuals with aphasia and the control participants, even in the context of producing bilabial closing gestures for a relatively simple speech-like task. The findings also highlight the functional differences between speech-like and non-speech tasks, despite a common movement coordination goal for bilabial closure.
Namasivayam, A. K. & van Lieshout, P. (2011). Speech motor skill and stuttering. Journal of Motor Behavior, 43(6), 477-489.
The authors review converging lines of evidence from behavioral, kinematic, and neuroimaging data that point to limitations in speech motor skills in people who stutter (PWS). From their review, they conclude that PWS differ from people who do not stutter in terms of their ability to improve with practice and retain practiced changes in the long term, and that they are less efficient and less flexible in their adaptation to lower (motor) and higher (cognitive-linguistic) order requirements that impact on speech motor functions. These findings in general provide empirical support for the position that PWS may occupy the low end of the speech motor skill continuum, as argued in the Speech Motor Skills approach (Van Lieshout, Hulstijn, & Peters, 2004).
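As background for the coordination indices reported in the Bose & van Lieshout study above (average relative phase and its variability), the following is a minimal sketch of one common way to estimate continuous relative phase between two articulator movement signals, using Hilbert-transform phase estimation. It is illustrative only, not the authors' implementation; the signals and sampling rate are invented for the example.

import numpy as np
from scipy.signal import hilbert

def relative_phase(upper_lip, lower_lip):
    """Continuous relative phase (degrees) between two movement signals."""
    # Remove the mean so the analytic-signal phase reflects the oscillation,
    # not a positional offset.
    ul = upper_lip - upper_lip.mean()
    ll = lower_lip - lower_lip.mean()
    phase_ul = np.angle(hilbert(ul))  # instantaneous phase of the upper lip
    phase_ll = np.angle(hilbert(ll))  # instantaneous phase of the lower lip
    # Wrap the phase difference to (-180, 180] degrees.
    dphi = np.rad2deg(phase_ul - phase_ll)
    return (dphi + 180.0) % 360.0 - 180.0

# Synthetic 3 Hz cyclic movements sampled at ~400 Hz, lower lip lagging by 30 degrees:
t = np.linspace(0.0, 2.0, 800)
ul = np.sin(2 * np.pi * 3 * t)
ll = np.sin(2 * np.pi * 3 * t - np.pi / 6)
dphi = relative_phase(ul, ll)
print(dphi.mean(), dphi.std())  # average relative phase and its variability

The mean and standard deviation of this trace correspond to the "average relative phase" and "variability of relative phase" measures of coordination stability.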
van Lieshout, P., Steele, C. M. & Lang, A. E. (2011). Tongue control for swallowing in Parkinson’s disease: Effects of age, rate, and stimulus consistency. Movement Disorders, 26(9), 1725-1763.
Patients with Parkinson’s disease often suffer from swallowing problems, especially at more advanced stages of the disease. Efficient swallows require well-coordinated tongue movements during bolus flow, but little is known about such movements in Parkinson’s disease. The current study presents data on tongue movements for patients with mild to moderate Parkinson’s disease (n = 10), age-matched adults (n = 13), and younger healthy adults (n = 15). Participants with Parkinson’s disease showed smaller and more variable movements in the horizontal movement plane, indicating that tongue movements are affected in early stages of Parkinson’s disease. These small and more variable horizontal movements may pose challenges for swallowing liquids efficiently and safely.
Ben-David, B. M., Nguyen, L. L. T. & van Lieshout, P. (2011). Stroop effects in persons with traumatic brain injury: Selective attention, speed of processing, or color-naming? A meta-analysis. Journal of the International Neuropsychological Society, 17(2), 354-363.
The color word Stroop test is the most common tool used to assess selective attention in persons with traumatic brain injury (TBI). A larger Stroop effect for TBI patients, as compared to controls, is generally interpreted as reflecting a decrease in selective attention. Alternatively, it has been suggested that this increase in Stroop effects is influenced by group differences in generalized speed of processing (SOP). The current study describes an overview and meta-analysis of 10 studies, where persons with TBI (N = 324) were compared to matched controls (N = 501) on the Stroop task. The findings confirmed that Stroop interference was significantly larger for TBI groups (p = .008). However, these differences may be strongly biased by TBI-related slowdown in generalized SOP (r² = .81 in a Brinley analysis). We also found that TBI-related changes in sensory processing may affect group differences. Mainly, a TBI-related increase in the latency difference between reading and naming the font color of a color-neutral word (r² = .96) was linked to Stroop effects. Our results suggest that, in using Stroop, it seems prudent to control for both sensory factors and SOP to differentiate potential changes in selective attention from other changes following TBI.
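For context, the Brinley analysis mentioned above regresses the TBI group's mean latencies on the matched controls' mean latencies across task conditions; a near-linear fit with high r² indicates that a single generalized slowing factor accounts for most of the group difference. Schematically, in our notation rather than the paper's:

\[ \overline{RT}^{\,\text{TBI}}_{c} = a \cdot \overline{RT}^{\,\text{control}}_{c} + b, \]

where \(c\) indexes task conditions; here \(r^2 = .81\) for that fit.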
Ben-David, B. M., van Lieshout, P. & Leszcz, T. (2011). A resource of validated affective and neutral sentences to assess identification of emotion in spoken language after a brain injury. Brain Injury, 25(2), 206-220.
Primary objective: The ability to identify emotions in spoken language is an essential component of communication and could be disrupted in persons with brain injury. Current tools to assess this function show important shortcomings. The aim is to present a set of validated and linguistically equated lexical sentences that can be used to separate the impact of lexical content and prosody on the processing of emotion in speech in persons with brain injury.
Methods and procedures: Using six-point Likert scales, a set of 125 sentences, carefully matched for linguistic variables, were rated by a group of young adults (n = 48) on their suitability to represent a particular emotion (anger, fear, happiness and sadness) in their lexical content.
Main outcomes and results: The findings identified a set of 50 sentences that were reliably associated with one particular emotion only or no emotion at all (neutral). Using less stringent criteria, 94 sentences were also found to be good representatives for these affective categories.
Conclusions: The findings generated a robust set of validated lexical stimuli necessary to reliably identify the specific contributions of verbal and prosodic information on difficulties in identifying emotions in speech in persons with brain injury.
Namasivayam, A. K., van Lieshout, P., McIlroy, W. E. & De Nil, L. (2009). Sensory feedback dependence hypothesis in persons who stutter. Human Movement Science, 28(6), 688-707.
The present study investigated the role of sensory feedback (auditory, proprioception, and tactile) at the intra- and inter-gestural levels of speech motor coordination in normal and fast speech rate conditions in two groups: (1) persons who stutter (PWS) and (2) those who do not (PNS). Feedback perturbations were carried out with the use of masking noise (auditory), tendon vibration (proprioception), and nonwords that differed in the amount of required tactile lip contact (/api/+tactile and /awi/−tactile). Comparisons were also made between jaw-free and jaw-immobilized (with a bite-block) task conditions. It was hypothesized that if PWS depend more strongly on sensory feedback control during speech production, they would show an increase in variability of movement coordination in the combined presence of fast speech rates and feedback perturbations, in particular, when jaw motions are blocked and adaptations in the other articulators are required to achieve the task goals.
Significant feedback perturbation effects were found for both groups, but the only significant between-group effect was found at fast speech rates in the jaw-free condition, showing that control speakers were more perturbed at the intra-gestural level of coordination than PWS when simultaneous (auditory, proprioceptive, and tactile) perturbations were present. The findings do not provide support for either the feedback dependency or the sensory deficit hypotheses described in the literature to explain movement characteristics found in fluent speech production of PWS.
Steele, C. M. & van Lieshout, P. (2009). Tongue movements during water swallowing in healthy young and older adults. Journal of Speech, Language, and Hearing Research, 52(5), 1255-1267.
The purpose of this study was to explore the nature and extent of variability in tongue movement during healthy swallowing as a function of aging and gender. In addition, changes were quantified in healthy tongue movements in response to specific differences in the nature of the swallowing task (discrete vs. sequential swallows). Electromagnetic midsagittal articulography (EMMA) was used to study the swallowing-related movements of markers located in midline on the anterior (blade), middle (body), and posterior (dorsum) tongue in a sample of 34 healthy adults in 2 age groups (under vs. over 50 years of age). Participants performed a series of reiterated water swallows, in either a discrete or a sequential manner. This study shows that age-related changes in tongue movements during swallowing are restricted to the domain of movement duration. The authors confirm that different tongue regions can be selectively modulated during swallowing tasks and that both functional and anatomical constraints influence the manner in which tongue movement modulation occurs. Sequential swallowing, in comparison to discrete swallowing, elicits simplification or down-scaling of several kinematic parameters. The data illustrate task-specific stereotyped patterns of tongue movement in swallowing, which are robust to the effects of healthy aging in all aspects other than movement duration.
Slis, A. & van Lieshout, P. (2009). Separating normal variation in movement amplitudes from gradient speech errors. Canadian Acoustics, 37(3), 196-197.
A study was conducted to investigate articulatory constraints and their influence on the occurrence of gestural intrusion and reduction errors. Speech errors were elicited with a repetitive speech task at two speech rates, normal and fast, which were individually determined for each participant and controlled by metronome presentation. Movement data were recorded with a 3D EMA system, and the raw movement amplitudes were normalized such that the maximum amplitude of a constriction per trial was set to 100% and the minimum constriction was set to 0%. Data were presented for the bisyllabic sequence "top cop" in the normal speaking rate condition for the 8 participants in the study. Correlations involving the normalized amplitude of the target gesture were also calculated for each separate trial of the study.
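The per-trial amplitude normalization described above is a min-max rescaling; written out in our own notation:

\[ \tilde{a} = 100 \times \frac{a - a_{\min}}{a_{\max} - a_{\min}}, \]

where \(a\) is a raw constriction amplitude and \(a_{\min}\), \(a_{\max}\) are the smallest and largest constriction amplitudes within that trial.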