
Phonetics

Sound check

München, 06/01/2012

Phonetician Jonathan Harrington uses technology to measure with scientific precision how individuals form and perceive sounds. The results suggest possible models of how languages evolve over time.

“Her accent has become more demotic,” says Jonathan Harrington about the Queen. Photo: AP/Alastair Grant/PA

In December 2006, one of Britain’s infamous tabloids appeared with the provocative headline “Does the Queen Speak Cockney?”, suggesting that the Head of the Commonwealth might have picked up a working-class accent. The story behind the headline also caught the attention of upmarket media like the Times and the BBC. And the man behind the story was a respected researcher – charming, witty and with the native British gift for understatement.

“Does the Queen Speak Cockney?”
Professor Jonathan Harrington comes from the southeast of England, and some of his ancestors were German. He studied Linguistics at Cambridge University, then moved to Australia, where he began to study sound change. All philologists encounter this topic in their introductory university courses. For example, students of German learn that a strong verb like biegen (to bend) belongs to “the second ablaut sequence” and takes the forms biegen, biuge, bouc, bugen, gebogen (Infinitive, First Person Singular Present, Past Indicative Singular and Plural, Past Participle) in Middle High German. These in turn derive from their Old High German equivalents biogan, biugu, boug, bugum, gibogan. These apparently arbitrary differences pose a fascinating question: how and why do languages change with time? Indeed, this is such an important question that the European Research Council has given Jonathan Harrington a grant of 2 million euros to search for an answer.

Philologists have been attacking the problem for the past two centuries, but Harrington, who is Professor of Phonetics and Digital Speech Processing at LMU, has come up with a new approach. He plans to use instrumental techniques borrowed from the natural sciences to measure various aspects of speech production and perception, and thus to quantify the process of sound change. This approach has the great advantage of providing empirical data which can be used to verify or reject competing theoretical models.

Sound change occurs very slowly. A significant shift in individual vowels, like that from bugum via bugen to the current form bogen (of the verb biegen) takes many centuries. Harrington intends to probe the possibility of a link between the historical evolution of spoken language and the process of language acquisition in childhood. The underlying thesis is that typical errors in speech production and perception that young children make eventually become fixed in the everyday speech of a language community. Moreover, it should be possible to trace changes in usage in the speech of individuals. To do so, Harrington needed speakers from areas in which sound change is known to have occurred, whose language use could be assessed over a long period.

It was known from earlier research that sound change had taken place in the standard accent of English, known as “Received Pronunciation”, over the past 50 years. To follow the process over time, Harrington had the bright idea of studying recordings of the Queen’s annual Christmas broadcasts, which are archived by the BBC. To his surprise, Buckingham Palace responded positively to his request for permission to perform a phonetic analysis on this material, and did so within three weeks.

Harrington analyzed vowel sounds from the recordings, focusing particularly on the quality of the u. In the upper-class speech of pre-war England (also adopted by the BBC), the accented vowels in the sentence Lucy threw the balloon to Sue were sounded as a long u. Today, the sentence generally sounds more like Lücy thrü the ballün to Sü: the standard pronunciation of long u has shifted toward the yu sound that occurs in a dialect known as Estuary English, which is similar to Received Pronunciation but shows some influences from a London accent. Harrington compared the Queen’s pronunciation of this and other vowels in her early and more recent broadcasts with those used by present-day female newsreaders. The analysis revealed that, over the years, the Queen’s vowels had shifted towards a more modern form of the Standard English accent, so that her speech now sounds less aristocratic than it once did. “Her accent has become more demotic,” says Harrington. So does the Queen now speak Cockney? “The headline is of course nonsense,” Harrington says. Nevertheless, the shift in the Queen’s pronunciation does provide an important clue to an understanding of the evolution of spoken language.
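
Acoustically, the shift from u toward ü is a fronting of the vowel, and it registers as a rise in the vowel’s second formant (F2). The sketch below shows, purely as an illustration, how such a measurement can be made in Python with the parselmouth bindings to the Praat phonetics software; the article does not name the tools Harrington actually used, and the file names and time stamps are invented.

    # Illustrative only: measure the second formant (F2) of a vowel.
    # parselmouth wraps the Praat phonetics software; the file names
    # and time stamps below are hypothetical.
    import parselmouth

    def f2_at(sound_path, time_s):
        """Second-formant frequency (Hz) at a given time in a recording."""
        snd = parselmouth.Sound(sound_path)
        formants = snd.to_formant_burg()  # LPC-based formant tracking
        return formants.get_value_at_time(2, time_s)

    # A fronted u (as in "Lücy") has a markedly higher F2 than the
    # back u of pre-war Received Pronunciation.
    f2_1957 = f2_at("xmas_broadcast_1957.wav", 1.23)
    f2_1985 = f2_at("xmas_broadcast_1985.wav", 0.98)
    print("F2 shift: %+.0f Hz (positive = fronter u)" % (f2_1985 - f2_1957))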

Sound shifts occur slowly, unconsciously and uncontrollably. And they are highly context-dependent. For one thing, they are influenced by the speaker’s social context. The fact that the Queen’s speech now sounds less “posh” is certainly due in part to social transformations in the 20th century, Harrington says. But change is also a function of phonetic context. “Take the a in man; most people hear no difference between this and the a in bad. But the two vowel sounds are acoustically quite distinct. To pronounce the a in man, unlike the a in bad, one must lower the velum to let some of the air out through the nose. Thus, the a in man is nasalized because it is influenced by the surrounding nasal consonants.” Harrington can be sure that this is the case, because he makes use of an articulograph to support what he hears. With the aid of a magnetic coil and sensors placed on the tongue, lips and jaw, this instrument accurately records the movements of the organs of speech as they form and emit sounds. As a complementary approach, he also carries out perceptual tests on his experimental subjects.
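
To give a sense of the kind of data such an instrument produces, the following sketch derives the speed of a single sensor coil from its sampled positions, a standard first step in articulatory kinematics. The sampling rate and the trajectory are invented for illustration; the article gives no technical details of Harrington’s setup.

    # Hypothetical sketch: kinematic analysis of articulograph output.
    # The sampling rate and sensor trajectory are invented.
    import numpy as np

    fs = 250.0                            # assumed samples per second
    t = np.arange(0, 1.0, 1.0 / fs)      # one second of recording
    x = 10 + 5 * np.tanh(8 * (t - 0.5))  # synthetic forward tongue movement (mm)
    y = np.full_like(t, 2.0)             # vertical position held constant (mm)

    # Tangential speed: magnitude of the derivative of position over time.
    vx = np.gradient(x, 1.0 / fs)
    vy = np.gradient(y, 1.0 / fs)
    speed = np.hypot(vx, vy)

    peak = speed.argmax()
    print("Peak articulator speed: %.1f mm/s at t = %.3f s" % (speed[peak], t[peak]))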

We do not perceive the nasalized a in man as such, because we have learned that its nasalized character is not intrinsic to the vowel, but results from its consonantal context. We subtract the nasalization from the vowel, and therefore usually don’t hear the difference between the a sounds in man and bad, although the two are acoustically quite different. But if this is so, how do we manage to distinguish and understand the wide range of sounds in natural language? When one considers the diversity of sounds we use, and how they vary with factors such as age, gender, dialect and social position, it becomes clear why it is so difficult to design effective software for speech recognition. Unlike computers, the human brain is capable of perceiving the a sounds in man and bad as the same. “It essentially removes the contributions of the temporal overlap of the nasals m and n with the vowel sound,” Harrington explains, “and this is why the a in man and the a in bad are heard as the same, despite the fact that one is a nasal sound and the other oral.”

Speech is a very complicated process, but then so is comprehension. It’s not surprising that errors should often occur. Indeed, Harrington suspects that errors are the root cause of sound change and are ultimately responsible for the unceasing evolution of languages. The a in the Latin word for hand, manus, is an oral sound, despite its position between nasal consonants. At some stage, though, some speakers of varieties of Latin began to ignore this context. “They ceased to filter it out at a perceptual level,” as Harrington puts it. And because the speech organs are lazy and like to drop sounds, the n was eroded and the a nasalized, giving rise to the French equivalent main. However, one should not conclude from this example that every a found in the context of a nasal consonant like m or n will one day be converted into a nasalized vowel. If that were the case, the transformation would follow the same trajectory in all languages; as Harrington emphasizes, it does not.

Sound shifts are in part arbitrary, and are influenced by interactions with other speakers or by contact with other languages. As Harrington insists, sound change is organic, cognitively determined and unpredictable, but nevertheless follows comprehensible laws. To explore these laws, he goes back to one of their possible sources: the process of language acquisition during childhood. “Children have to learn to compensate for variation. That is why the sound overlaps in children’s speech are much greater than in adults – indeed so much so that they cannot be perceptually filtered, simply because language is infinitely variable.” Harrington therefore suspects that sound changes originate during language acquisition. We learn language only by making mistakes, and in doing so, we drive the incessant evolution of all living languages.

By Maximilian G. Burkhart / Translation: Paul Hardy


Prof. Dr. Jonathan Harrington holds the Chair of Phonetics and Digital Speech Processing and is Director of the Institute of Phonetics and Speech Processing at LMU. Born in 1958, Harrington studied at Downing College, Cambridge University, obtaining his doctorate in Linguistics in 1986. He went on to teach and conduct research at Edinburgh University and Macquarie University in Sydney. In 2002 he accepted a professorship in Phonetics and Digital Speech Processing at the University of Kiel, before moving to Munich in 2006. In 2011 Harrington received an Advanced Investigator’s Grant from the European Research Council (ERC).
