Neuroscience
Inner words spoken in silence
As the words fall from your lips, it's the first you've heard of them. That is, you don't get a sneak preview of what your own words sound like before you utter them. That's according to Falk Huettig and Robert Hartsuiker, who say their finding has implications for our understanding of the brain's internal monitoring processes.
The researchers took advantage of an established effect whereby the sound of a spoken word draws our eyes automatically towards written words that sound similar. Forty-eight Dutch-speaking undergrads were presented with a succession of line drawings, each of which appeared alongside three written words. The participants' task was to name out loud the objects in the drawings. Meanwhile the researchers monitored their eye movements.
On each trial, one of the written words sounded like the name of the drawn object - for example, for a drawing of a heart ('hart' in Dutch), the accompanying words were: harp (also 'harp' in English), zetel ('couch') and raam ('window'). As expected, after saying the word 'hart', the participants' eyes were drawn to the word 'harp'. The key question was whether this happened earlier than in previous studies in which participants heard the target words spoken by someone else rather than by themselves. If we hear our own speech internally, before we utter it, then the participants' eyes should have been drawn to the similar-sounding words earlier than if they'd heard another person's utterances.
In fact, the participants' eyes were drawn to the similar-sounding words with a latency (around 300ms) suggesting they'd only heard their own utterances once those utterances were public. There was no sneak internal perceptual preview.
It's important to clarify: we definitely do monitor our speech internally. For example, speakers can detect their own speech errors even when their vocal utterances are masked by noise. What this new research suggests is that this internal monitoring isn't done perceptually - we don't 'hear' a pre-release copy of our own utterances. What's the alternative? Huettig and Hartsuiker suggest error-checking is somehow built into the speech production system itself, but they admit: 'there are presently no elaborated theories of [this] alternative viewpoint.'
_________________________________
Huettig, F., & Hartsuiker, R. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 25(3), 347-374. DOI: 10.1080/01690960903046926

Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.