Speech is special?

Beginning with Chomsky's work in the late 1950s and 1960s (e.g., Aspects of the Theory of Syntax, 1965), linguistics has distinguished itself from philology and, more importantly, from behaviorist psychology by appealing to the notion that speech is special. This idea proposes that speech (and language) differs from other aspects of cognition like attention and memory in that humans uniquely possess it and that it lives in dedicated parts of the brain that other animals do not have.

With regard to speech (and not necessarily language), there are two ways to approach the speech-is-special ethos:

  1. Strong version: The perception of speech involves mechanisms that are different from the perceptual mechanisms used for other sounds (general auditory perception). The brain responds differently to speech than it does to, say, a car honking. Evidence for this might be the timing of neural oscillations discussed earlier (neural firing at different timescales, timed to capture various aspects of the speech signal like voicing or POA).
  2. Weak version: While the brain is tuned for speech, that tuning is built on the same perceptual “machinery” that supports general audition. [The details of this seem vague to me!]

Word deafness and auditory agnosia

The book points to the phenomenon of pure word deafness as evidence that speech is special (in some form). The case of FO (not fundamental frequency!) is interesting because she had no difficulty speaking or hearing and was able to discriminate auditorily presented vowels. When FO was asked to discriminate minimal pairs that differed in consonants, however, she failed. FO also had difficulty discriminating words in a word-to-picture matching task (point to the “bat” when the screen shows a bat, a mat, a cat, etc.), but she was able to point to a “table” when the screen showed other pieces of furniture, presumably because none of the distractors were minimal-pair competitors. This is called “pure word deafness”. [Wait, maybe she can’t discriminate consonants in contrast?]

Pure word deafness is contrasted with auditory agnosia, which is very similar except that it is not words that are affected but general sounds. That is, patients with auditory agnosia cannot identify common sounds like dogs barking or cars honking, but they have no problem with the kind of language task FO failed, like discriminating words that differ in a single consonant.

So, if we put together the evidence from pure word deafness and auditory agnosia, we have what’s called a double dissociation. [The book has a nice table.] This double dissociation shows that speech is somehow different from general audition because each ability can be impaired while the other is spared.

How is speech special?

We might now ask what it is about speech that sets it apart from other types of sounds. The best answer is that speech is both perceived and produced. That is, we can all (as English speakers) produce and perceive the difference between “pa” and “ba”, but we can’t produce (with our vocal apparatus) the sounds of two different car doors slamming, and we may not be as good at discriminating them. So perhaps speech production aids speech perception, and the motor/articulatory system is central to the idea that speech is special.

Motor evidence

Pulvermüller et al. (2006): fMRI task in which subjects produce CVs differing in consonant POA -> motor cortex is active in a spatially segregated way according to POA (lip area active during bilabials, tongue area during alveolars, etc.). During speech perception, the same motor areas are active according to POA!

Möttönen and Watkins (2009): rTMS study in which the motor cortex is temporarily inhibited. Does disrupting the motor cortex affect speech perception? -> YES! The effect was not complete, but discrimination performance declined when the lip region was targeted for pa/ba and ta/da, and not for other phonemic pairs.

But wait: there are cases where speech perception remains intact while speech production is affected after some brain injury or insult, namely Broca’s aphasia. Rogalsky and Hickok’s (2011) study describes five cases where patients with lesions (damage) in motor cortex succeed at speech perception tasks but have severely impaired production.

The takeaway is that the motor system is definitely engaged during speech perception but is not necessary for the task.

Motor theory of speech perception

Much of this literature relates to a decades-old question in the literature, namely: what are the “objects” of speech perception? Are they acoustic? Are they articulatory? That is, when you perceive me saying “pa”, are you recovering the phoneme /p/ simply from the acoustics (not so simple), or from something else? Proponents of the motor theory of speech perception (see Liberman and colleagues, starting in the 1950s) would argue that you’re actually perceiving the articulation of /p/. This got a lot of pushback in the 70s and 80s, and a revised version of the theory suggests that listeners recover the neural program used to initiate /p/ when they perceive “pa”. It looks like there is a lot of evidence supporting this, but it’s not the whole story.