

Speech is human vocal communication using language. Each language uses phonetic combinations of vowel and consonant sounds that form the sound of its words (so that even a word shared between languages, such as "role" or "hotel", sounds different in English and in French), and combines those words, as meaningful items in the language's lexicon, according to the syntactic constraints that govern how words function in a sentence. In speaking, speakers perform many different intentional speech acts, e.g., informing, declaring, asking, persuading, and directing, and can use enunciation, intonation, degrees of loudness, tempo, and other non-representational or paralinguistic aspects of vocalization to convey meaning. In their speech, speakers also unintentionally communicate many aspects of their social position, such as sex, age, place of origin (through accent), physical state (alertness or sleepiness, vigor or weakness, health or illness), psychological state (emotions or moods), psychophysical state (sobriety or drunkenness, normal consciousness or trance), education or experience, and the like.

Although people ordinarily use speech in dealing with other persons (or animals), when people swear they do not always mean to communicate anything to anyone, and sometimes in expressing urgent emotions or desires they use speech as a quasi-magical cause, as when they encourage a player in a game to do something or warn them not to. There are also many situations in which people engage in solitary speech. People talk to themselves sometimes in acts that develop what some psychologists (e.g., Lev Vygotsky) have maintained is the use of silent speech in thinking, an interior monologue that vivifies and organizes cognition, and sometimes in the momentary adoption of a dual persona, the self addressing the self as though addressing another person. Solo speech can be used to memorize or to test one's memorization of things, and in prayer or meditation (e.g., the use of a mantra).

Researchers study many different aspects of speech: speech production and speech perception of the sounds used in a language; speech repetition, the ability to map heard spoken words onto the vocalizations needed to recreate them, which plays a key role in children's enlargement of their vocabulary; speech errors; and the areas of the human brain, such as Broca's area and Wernicke's area, that underlie speech. Speech is the subject of study for linguistics, cognitive science, communication studies, psychology, computer science, speech pathology, otolaryngology, and acoustics. Speech contrasts with written language,[1] which may differ in its vocabulary, syntax, and phonetics from the spoken language, a situation called diglossia.

The evolutionary origins of speech are unknown and subject to much debate and speculation. While animals also communicate using vocalizations, and trained apes such as Washoe and Kanzi can use simple sign language, no animal's vocalizations are articulated phonemically and syntactically, and so they do not constitute speech.


Production

Speech production is a multi-step process by which thoughts are translated into spoken utterances.

Production involves the selection of appropriate words and the appropriate form of those words from the lexicon and morphology, and the organization of those words through the syntax.

Then, the phonetic properties of the words are retrieved and the sentence is uttered through the articulations associated with those phonetic properties.[2]

In linguistics (articulatory phonetics), articulation refers to how the tongue, lips, jaw, vocal cords, and other speech organs are used to make sounds. Speech sounds are categorized by manner of articulation and place of articulation. Place of articulation refers to where in the mouth the airstream is constricted. Manner of articulation refers to the way the speech organs interact, such as how closely the airstream is restricted, what type of airstream is used (e.g. pulmonic, implosive, ejective, or click), whether or not the vocal cords are vibrating, and whether the nasal cavity is open to the airstream.[3] The concept is primarily used for the production of consonants, but can be applied to vowels for qualities such as voicing and nasalization. For any place of articulation, there may be several manners of articulation, and therefore several homorganic consonants.
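This feature system lends itself to a simple tabular representation. The following sketch (a simplified, illustrative feature table, not a complete IPA inventory; the names are chosen for this example) encodes a few English consonants by place, manner, and voicing, and checks whether two consonants are homorganic:

```python
# Simplified articulatory feature table for a few English consonants.
consonants = {
    "p": {"place": "bilabial", "manner": "plosive",   "voiced": False},
    "b": {"place": "bilabial", "manner": "plosive",   "voiced": True},
    "m": {"place": "bilabial", "manner": "nasal",     "voiced": True},
    "t": {"place": "alveolar", "manner": "plosive",   "voiced": False},
    "d": {"place": "alveolar", "manner": "plosive",   "voiced": True},
    "s": {"place": "alveolar", "manner": "fricative", "voiced": False},
    "z": {"place": "alveolar", "manner": "fricative", "voiced": True},
}

def homorganic(a, b):
    """Two consonants are homorganic if they share a place of articulation."""
    return consonants[a]["place"] == consonants[b]["place"]

print(homorganic("p", "m"))  # True: /p/ and /m/ are both bilabial
print(homorganic("p", "s"))  # False: bilabial vs. alveolar
```

Note how /p/, /b/, and /m/ share one place of articulation but differ in manner and voicing, which is exactly the combinatorial structure described above.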

Normal human speech is pulmonic, produced with pressure from the lungs, which creates phonation in the glottis in the larynx, which is then modified by the vocal tract and mouth into different vowels and consonants. However, humans can pronounce words without the use of the lungs and glottis in alaryngeal speech, of which there are three types: esophageal speech, pharyngeal speech, and buccal speech (better known as Donald Duck talk).

Speech errors

Speech production is a complex activity, and as a consequence errors are common, especially in children.

Speech errors come in many forms and are often used to provide evidence to support hypotheses about the nature of speech.[4] As a result, speech errors are often used in the construction of models for language production and child language acquisition. For example, the fact that children often make the error of over-regularizing the -ed past-tense suffix in English (e.g., saying 'singed' instead of 'sang') shows that the regular forms are acquired earlier.[5][6] Speech errors associated with certain kinds of aphasia have been used to map certain components of speech onto the brain and to see the relation between different aspects of production: for example, the difficulty of expressive aphasia patients in producing regular past-tense verbs, but not irregulars like 'sing-sang', has been used to demonstrate that regular inflected forms of a word are not individually stored in the lexicon, but produced by affixation of the base form.[7]
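The over-regularization pattern can be sketched with a toy production model (the word list and function names here are hypothetical, purely for illustration): irregular past-tense forms are looked up in the lexicon, while everything else receives the regular -ed affix, so a speaker who has not yet stored an irregular form produces the regularized error.

```python
# Toy past-tense production: irregular forms are looked up in the
# lexicon; all other verbs get the regular -ed affix on the base form.
IRREGULARS = {"sing": "sang", "eat": "ate", "go": "went"}

def past_tense(verb, lexicon=IRREGULARS):
    if verb in lexicon:
        return lexicon[verb]  # stored irregular form
    return verb + "ed"        # regular affixation of the base form

# A speaker with the full lexicon produces the irregular form:
print(past_tense("sing"))              # sang

# A child who has not yet stored 'sing' over-regularizes:
print(past_tense("sing", lexicon={}))  # singed
```

In this sketch, the regular rule is productive and applies by default, which mirrors the claim that regular inflected forms need not be individually stored.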


Perception

Speech perception refers to the processes by which humans interpret and understand the sounds used in language.

The study of speech perception is closely linked to the fields of phonetics and phonology in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how listeners recognize speech sounds and use this information to understand spoken language. Research into speech perception also has applications in building computer systems that can recognize speech, as well as improving speech recognition for hearing- and language-impaired listeners.[8]

Speech perception is categorical, in that people put the sounds they hear into categories rather than perceiving them as a spectrum. People more readily hear differences in sounds across categorical boundaries than within them. A good example of this is voice onset time (VOT). For example, Hebrew speakers, who distinguish voiced /b/ from voiceless /p/, will more easily detect a change in VOT from −10 (perceived as /b/) to 0 (perceived as /p/) than a change from +10 to +20, or from −10 to −20, despite these being equally large changes on the VOT spectrum.[9]
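A minimal sketch of this categorical pattern, assuming an idealized /b/–/p/ boundary at a VOT of 0 ms (real category boundaries vary by language and context, and the function names are chosen for this example):

```python
# Toy model of categorical perception of voice onset time (VOT, in ms).
def categorize(vot_ms):
    """Map a VOT value to a phoneme category (idealized boundary at 0 ms)."""
    return "/b/" if vot_ms < 0 else "/p/"

def discriminable(vot_a, vot_b):
    """In a categorical regime, a difference is readily detected only
    when the two values fall on opposite sides of the boundary."""
    return categorize(vot_a) != categorize(vot_b)

print(discriminable(-10, 0))    # True: crosses the /b/-/p/ boundary
print(discriminable(+10, +20))  # False: both heard as /p/
print(discriminable(-10, -20))  # False: both heard as /b/
```

All three pairs differ by the same 10 ms, but only the cross-boundary pair is reliably heard as different.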


Repetition

In speech repetition, speech being heard is quickly turned from sensory input into motor instructions needed for its immediate or delayed vocal imitation (in phonological memory). This type of mapping plays a key role in enabling children to expand their spoken vocabulary. Masur (1995) found that how often children repeat novel words versus those they already have in their lexicon is related to the size of their lexicon later on, with young children who repeat more novel words having a larger lexicon later in development. Speech repetition could help facilitate the acquisition of this larger lexicon.[10]

Problems involving speech

There are several organic and psychological factors that can affect speech.

Among these are:

  1. Diseases and disorders of the lungs or the vocal cords, including paralysis, respiratory infections (bronchitis), vocal fold nodules and cancers of the lungs and throat.

  2. Diseases and disorders of the brain, including alogia, aphasias, dysarthria, dystonia and speech processing disorders, where impaired motor planning, nerve transmission, phonological processing or perception of the message (as opposed to the actual sound) leads to poor speech production.

  3. Hearing problems, such as otitis media with effusion, and listening problems, such as auditory processing disorders, can lead to phonological problems.

  4. Articulatory problems, such as slurred speech, stuttering, lisping, cleft palate, ataxia, or nerve damage leading to problems in articulation. Tourette syndrome and tics can also affect speech. Various congenital and acquired tongue diseases can affect speech as can motor neuron disease.

  5. In addition to dysphasia, anomia and auditory processing disorder can impede the quality of auditory perception, and therefore of expression. Those who are hard of hearing or deaf may be considered to fall into this category.

Brain physiology

The classical model

The classical or Wernicke-Geschwind model of the language system in the brain focuses on Broca's area in the inferior prefrontal cortex, and Wernicke's area in the posterior superior temporal gyrus on the dominant hemisphere of the brain (typically the left hemisphere for language). In this model, a linguistic auditory signal is first sent from the auditory cortex to Wernicke's area. The lexicon is accessed in Wernicke's area, and these words are sent via the arcuate fasciculus to Broca's area, where morphology, syntax, and instructions for articulation are generated. This is then sent from Broca's area to the motor cortex for articulation.[11]

Paul Broca identified an approximate region of the brain in 1861 which, when damaged in two of his patients, caused severe deficits in speech production: his patients were unable to speak beyond a few monosyllabic words. This deficit, known as Broca's or expressive aphasia, is characterized by difficulty in speech production where speech is slow and labored, function words are absent, and syntax is severely impaired, as in telegraphic speech. In expressive aphasia, speech comprehension is generally less affected except in the comprehension of grammatically complex sentences.[12]

Wernicke's area is named after Carl Wernicke, who in 1874 proposed a connection between damage to the posterior area of the left superior temporal gyrus and aphasia, as he noted that not all aphasic patients had suffered damage to the prefrontal cortex.[13] Damage to Wernicke's area produces Wernicke's or receptive aphasia, which is characterized by relatively normal syntax and prosody but severe impairment in lexical access, resulting in poor comprehension and nonsensical or jargon speech.[12]

Modern research

Modern models of the neurological systems behind linguistic comprehension and production recognize the importance of Broca's and Wernicke's areas, but are not limited to them nor solely to the left hemisphere.[14] Instead, multiple streams are involved in speech production and comprehension.

Damage to the left lateral sulcus has been connected with difficulty in processing and producing morphology and syntax, while lexical access and comprehension of irregular forms (e.g. eat-ate) remain unaffected.[15] Moreover, the circuits involved in human speech comprehension dynamically adapt with learning, for example by becoming more efficient in terms of processing time when listening to familiar messages such as learned verses.[16]

See also

  • FOXP2

  • Freedom of speech

  • Imagined speech

  • Index of linguistics articles

  • List of language disorders

  • Spatial hearing loss

  • Speechwriter

  • Talking birds

  • Vocology

  • Public speaking

  • Origin of language


References

  1. American Heritage Dictionary.

  2. Levelt, Willem J. M. (1999). "Models of word production". Trends in Cognitive Sciences. 3 (6): 223–32. doi:10.1016/s1364-6613(99)01319-4. PMID 10354575.

  3. Catford, J.C.; Esling, J.H. (2006). "Articulatory Phonetics". In Brown, Keith (ed.). Encyclopedia of Language & Linguistics (2nd ed.). Amsterdam: Elsevier Science. pp. 425–42.

  4. Fromkin, Victoria (1973). "Introduction". Speech Errors as Linguistic Evidence. The Hague: Mouton. pp. 11–46.

  5. Plunkett, Kim; Juola, Patrick (1999). "A connectionist model of English past tense and plural morphology". Cognitive Science. 23 (4): 463–90. doi:10.1207/s15516709cog2304_4.

  6. Nicoladis, Elena; Paradis, Johanne (2012). "Acquiring Regular and Irregular Past Tense Morphemes in English and French: Evidence From Bilingual Children". Language Learning. 62 (1): 170–97. doi:10.1111/j.1467-9922.2010.00628.x.

  7. Ullman, Michael T.; et al. (2005). "Neural correlates of lexicon and grammar: Evidence from the production, reading, and judgement of inflection in aphasia". Brain and Language. 93 (2): 185–238. doi:10.1016/j.bandl.2004.10.001. PMID 15781306.

  8. Kennison, Shelia (2013). Introduction to Language Development. Los Angeles: Sage.

  9. Kishon-Rabin, Liat; Rotshtein, Shira; Taitelbaum, Riki (2002). "Underlying Mechanism for Categorical Perception: Tone-Onset Time and Voice-Onset Time Evidence of Hebrew Voicing". Journal of Basic and Clinical Physiology and Pharmacology. 13 (2): 117–34. doi:10.1515/jbcpp.2002.13.2.117. PMID 16411426.

  10. Masur, Elise (1995). "Infants' Early Verbal Imitation and Their Later Lexical Development". Merrill-Palmer Quarterly. 41 (3): 286–306.

  11. Kertesz, A. (2005). "Wernicke–Geschwind Model". In L. Nadel (ed.). Encyclopedia of Cognitive Science. Hoboken, NJ: Wiley.

  12. Hillis, A.E.; Caramazza, A. (2005). "Aphasia". In L. Nadel (ed.). Encyclopedia of Cognitive Science. Hoboken, NJ: Wiley.

  13. Wernicke, K. (1995). "The aphasia symptom-complex: A psychological study on an anatomical basis (1875)". In Paul Eling (ed.). Reader in the History of Aphasia: From Franz Gall to Norman Geschwind. 4. Amsterdam: John Benjamins. pp. 69–89. ISBN 978-90-272-1893-3.

  14. Nakai, Y; Jeong, JW; Brown, EC; Rothermel, R; Kojima, K; Kambara, T; Shah, A; Mittal, S; Sood, S; Asano, E (2017). "Three- and four-dimensional mapping of speech and language in patients with epilepsy". Brain. 140 (5): 1351–70. doi:10.1093/brain/awx051. PMC 5405238. PMID 28334963.

  15. Tyler, Lorraine K.; Marslen-Wilson, William (2009). "Fronto-temporal brain systems supporting spoken language comprehension". In Moore, Brian C.J.; Tyler, Lorraine K.; Marslen-Wilson, William D. (eds.). The Perception of Speech: From Sound to Meaning. Oxford: Oxford University Press. pp. 193–217. ISBN 978-0-19-956131-5.

  16. Cervantes Constantino, F; Simon, JZ (2018). "Restoration and Efficiency of the Neural Processing of Continuous Speech Are Promoted by Prior Knowledge". Frontiers in Systems Neuroscience. 12 (56): 56. doi:10.3389/fnsys.2018.00056. PMC 6220042. PMID 30429778.