
Structural and functional peculiarities of speech system organization in right-handed subjects revealed using functional magnetic resonance imaging. The co-dominant right hemisphere

Karlov VA

Evdokimov Moscow State University of Medicine and Dentistry, Moscow, Russia


Shklovskiy VM

Center of Speech Pathology and Neurorehabilitation of Moscow Healthcare Department, Russia

Konovalov RN

Neurology research center of RAS, Moscow, Russia

Zolovkina VS

Evdokimov Moscow State University of Medicine and Dentistry, Moscow, Russia

DOI: 10.15761/JBN.1000117


Abstract

Many mechanisms underlying the cerebral organization of speech remain poorly understood. Functional magnetic resonance imaging (fMRI) makes it possible to study non-invasively how the subdominant right hemisphere contributes to the language function of right-handed subjects. Using three paradigms of our own design, we examined speech prosody in healthy volunteers. The first paradigm was aimed at identifying sentence intonation, the second at recognizing accented and unaccented rhythms, and the third at melody recognition. Data processing revealed activation both of the classical speech areas, greater in volume and intensity in the right hemisphere, and of secondary and tertiary areas, in particular the supplementary motor area, Brodmann areas 6 and 9 and, during the third paradigm, Brodmann area 46. We also observed activation of the cerebellum, the limbic system and deep subcortical structures of the brain. The results confirm the distribution of speech representations across both cerebral hemispheres, which allows us to speak of hemispheric co-dominance for this function. The proposed paradigms fully meet the stated objectives and can be used for mapping prosodic speech characteristics in humans.

Key words

fMRI, prosodic speech, speech paradigm, right hemisphere, hemisphere co-dominance

Introduction

Speech is one of the latest and, correspondingly, phylogenetically newest functions of the cerebral hemispheres. It is an inherent feature of humans and has its own anatomical basis in the form of specific cytoarchitectonic areas.

None of the animals, not even the anthropoid apes, shows verbal logical reasoning. Animal communication includes numerous “symbolic” elements, namely movements, sounds and postures, but it lacks the abstract notions, words, codes and articulate speech intended to designate the objects of the environment, their relations or qualities. “Animal language”, being a generalized symbolism of transmitted signals, is the basis of any communication system. The shift from the biological to the social plane in human development, however, became the precondition for the emergence of articulate speech in the course of joint human activity. This was well expressed by F. Engels in his famous statement: “It was labour, together with articulate speech, that became the two principal stimuli under whose influence the ape brain changed into the human brain, which, despite all its similarity to the ape’s, far surpasses it in size and perfection”, and further: “The development of the brain and of its attendant senses, of an increasingly clear consciousness and of the capacity for abstraction and reasoning reacted in turn upon labour and language, giving both ever new stimuli for further development.” [1]

Speech is also a particularly important means of conveying emotion. Correct speech intonation not only conveys the meaning of an utterance but is also a tool for the strongest emotional impact on an interlocutor. Its constituents are the rate of speech, tone of voice, speech melody, phrasal stress and pauses. All the sounds pronounced by humans have their own nature and content, and the task of each speaker is to feel them.

There are two major channels in the perception of emotional speech: prosody and semantics [2]. Speech semantics refers to the use of individual words and their parts as means of expression, namely for designating objects and their categories, phenomena, actions and interactions. Speech is adopted as a socially fixed symbolic system based on the analysis and synthesis, generalization and comparison of objects and phenomena of reality. A human gradually acquires the logical operations needed to understand the nature of observed phenomena. Meaningful communication presumes the synthesis of conceptual utterances through the understanding of speech.

The term prosody (from the Greek for stress, refrain) refers to the combination of phonetic means of speech (related to its power, pitch and temporal characteristics) at all levels of speech segmentation, which is responsible for its recognition. The speech segments include the syllable, the word, the collocation, the syntagma, the phrase, the super-phrasal unit and the text.

Speech, both impressive and expressive, is not only a specific form of communication but also a mechanism of intellectual activity, a means of organizing and regulating one’s own mental processes. The syntagma and the collocation are responsible for two interrelated speech processes: the analysis and generalization of information coming from the environment, on the one hand, and the formation of judgements and conclusions, on the other [3].

Higher mental functions in humans are functional systems of social origin that are mediated by certain brain structures and physiological processes. Speech is a part of any comprehensive form of mental activity, and the central role is played by the connections of the second signal system.

In 1932, I.P. Pavlov defined speech as “second signals, the signals of signals, coming to the cortex from the speech organs. They represent an abstraction from reality and permit generalization, which constitutes our personal, specifically human higher-order thinking, first creating panhuman empiricism and then, finally, science, an instrument of the higher orientation of a human in the surrounding world and in himself”. He characterized the second signal system as the “highest regulator of human behaviour” [4].

Speech organization of mental processes is a complex integration of nervous operations and in this context may be regarded as a reflection of the activity of the whole brain as the joint work of a set of analyzers. In the narrow sense, speech processes are a system of sensorimotor coordinations. Speech perception, which is based on the analysis and synthesis of the elements of the sound stream, is mediated by the functioning of the auditory and kinesthetic analyzers. Word articulation is the most complex process, representing an experientially acquired system of coordinated articulatory movements whose afferent basis is the functioning of the kinesthetic and auditory analyzers [5].

Although the neural organization of speech has been studied for more than a century, many aspects of the mechanisms of this form of higher cortical function, which is inherent to humans, are still not well understood. The introduction of state-of-the-art neuroimaging techniques such as functional magnetic resonance imaging (fMRI) into clinical practice enables non-invasive study of the normal structural and functional organization of the speech system as well as of its pathological neuroplastic reorganization, making it possible to predict the outcome of speech-related and other cognitive disorders associated with different types of pathology involving the cerebral hemispheres. Functional MRI is based on the local increase in blood flow that accompanies an increase in regional brain activity, leading to lower levels of deoxyhemoglobin and higher levels of oxyhemoglobin (the BOLD, blood-oxygen-level-dependent, effect) during the performance of specific tasks (paradigms) consisting of alternating activation and rest conditions. The resulting increase in local magnetic field homogeneity raises the signal intensity in a series of T2*-weighted images, and its quantitative assessment makes it possible to identify neuronal activation [6].
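
To make the block-design model concrete, the sketch below builds a boxcar regressor for alternating 30-second rest and activation blocks and convolves it with a canonical double-gamma haemodynamic response function. It is an illustration only, not the authors' analysis code; the block length and TR are those reported in Materials and methods, while the HRF parameters are conventional SPM-style defaults.

```python
# Minimal sketch (not the authors' code): expected BOLD time course for a
# block design of alternating rest/activation blocks, obtained by convolving
# a boxcar with a canonical double-gamma HRF.
import numpy as np
from scipy.stats import gamma

TR = 3.75          # seconds per volume (value reported later in this study)
BLOCK_LEN = 30.0   # seconds per rest/activation block
N_BLOCKS = 8       # alternating blocks, rest first

n_scans = int(N_BLOCKS * BLOCK_LEN / TR)
t = np.arange(n_scans) * TR
boxcar = ((t // BLOCK_LEN) % 2 == 1).astype(float)   # odd blocks = activation

# Canonical double-gamma HRF: positive peak near 6 s, undershoot near 16 s
hrf_t = np.arange(0, 32, TR)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0
hrf /= hrf.sum()

expected_bold = np.convolve(boxcar, hrf)[:n_scans]   # model regressor
```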

No comprehensive set of paradigms has so far been proposed for studying the speech organization of the subdominant right hemisphere in right-handed subjects, namely speech prosody.

Materials and methods

Twenty-one subjects (15 females, 6 males) aged 27 to 65 years (mean age 43 years) were enrolled in the study. The subjects had no pathological changes in the brain matter according to conventional MRI and showed no signs of neurological, mental, endocrine or cardiovascular disorders, including arterial hypertension (blood pressure not exceeding 140/80 mm Hg). None of the subjects had alcohol dependence, smoked or took pharmaceutical drugs on a regular basis, and all of them undergo an annual medical examination. All enrolled subjects were right-handed, which was confirmed using the Edinburgh Handedness Inventory.
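
The Edinburgh Handedness Inventory summarizes handedness as a laterality quotient computed from the right- and left-hand preference scores over the inventory items. The sketch below shows this standard calculation; the item scores and example response pattern are illustrative assumptions, since the paper does not report the exact scoring used.

```python
# Illustrative Edinburgh Handedness Inventory laterality quotient (LQ).
# Item scores and the example response pattern are assumptions, not data
# from the paper; positive LQ values indicate right-hand preference.
def ehi_laterality_quotient(right_scores, left_scores):
    """LQ = 100 * (R - L) / (R + L)."""
    r, l = sum(right_scores), sum(left_scores)
    return 100.0 * (r - l) / (r + l)

# Example: a strongly right-handed pattern over the 10 standard items
lq = ehi_laterality_quotient(right_scores=[2] * 9 + [1], left_scores=[0] * 9 + [1])
print(f"LQ = {lq:.1f}")   # 90.0 here, consistent with right-handedness
```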

Paradigms

All tasks were demonstrated to the study subjects in advance and rehearsed prior to the start of the study. Visual and auditory stimuli were used. The tasks were transmitted from the console room to a monitor screen, from which they were viewed by the subject via a mirror system. Auditory stimuli were presented through headphones.

Three different paradigms, each with its own scan session, were used in this study. Each paradigm had a block design and consisted of 8 alternating activation and rest blocks. The paradigm duration was 4.05 min (30 seconds per block).
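
For the analysis, the timing of each session can be written as an events table (onset, duration, condition). The sketch below encodes the design just described, eight alternating 30-second blocks beginning with rest, modelling only the four activation blocks explicitly and treating rest as the implicit baseline; the column names follow the BIDS/nilearn convention and are an assumption rather than a description of the authors' actual files.

```python
# Hypothetical events table for one scan session: eight alternating
# 30-second blocks beginning with rest. Only the activation blocks are
# modelled explicitly; the rest blocks serve as the implicit baseline.
import pandas as pd

BLOCK_LEN = 30.0                                        # seconds per block
active_onsets = [BLOCK_LEN * i for i in (1, 3, 5, 7)]   # 30, 90, 150, 210 s

events = pd.DataFrame({
    "onset": active_onsets,
    "duration": BLOCK_LEN,
    "trial_type": "active",
})
print(events)
print("Total block time:", 8 * BLOCK_LEN, "s")          # 240 s of blocks
```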

Each scan session began with a rest period during which the subjects were asked to fixate on a point displayed on the screen. This was followed by an activation period consisting of a series of slides with different stimuli for each paradigm.

In the first paradigm, the task was to determine intonational characteristics (interrogative, exclamatory or narrative). Russian proverbs and sayings with a hidden undertone were used as items, e.g. “Let a goat into the garden” (translator’s note: similar to “Throw the cat among the pigeons”) or “Ride slower and you'll get further” (translator’s note: similar to “Slow and steady wins the race”). All auditory stimuli were read by a professional speaker (female) and recorded in a specially equipped audio recording studio. Three variants of intonation, represented by the symbols “.”, “?” and “!”, were provided as the visual stimulus. The subjects were asked to match the auditory and visual stimuli by pressing the corresponding button on the response console.

In the second paradigm, the task was to identify the given accented and unaccented rhythms. These rhythms were also recorded using professional audio recording equipment. As in the first paradigm, the subjects were asked to match the auditory and visual stimuli.

In the last paradigm, the task was to recognize a melody. The titles of three different songs were displayed on the screen while the music of one of them was played through the headphones as the auditory stimulus. Taking into account the characteristics of the study subjects (men and women aged 27 to 65 years), all melodies were taken from Soviet-era movies. To make each task more difficult, fragments of three audio tracks similar in sound and rhythm were selected with the help of professional musicians.

MR data were obtained on a 1.5 T Magnetom Avanto scanner (Siemens, Erlangen, Germany). To rule out pathological lesions of the brain matter, the investigation began with a conventional T2 turbo spin echo sequence in the axial plane. For anatomical data, a 3D T1-weighted gradient-echo sequence (T1-mpr) was applied, and a series of 176 sagittal slices covering the whole brain was obtained (repetition time (TR)=1940 ms; echo time (TE)=3.1 ms; flip angle=15 degrees; matrix=256 x 256; slice thickness=1.0 mm; voxel size=1 x 1 x 1 mm). Then four functional datasets were obtained for each paradigm using a T2*-weighted gradient-echo sequence in the axial plane (repetition time (TR)=3750 ms; echo time (TE)=47 ms; flip angle=90 degrees; matrix=64 x 64; slice thickness=3.0 mm; voxel size=3 x 3 x 3 mm). Each T2*-weighted run included 64 measurements of the whole brain volume.
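
These acquisition parameters allow a simple consistency check: 64 whole-brain measurements at a TR of 3750 ms cover 240 s, i.e. exactly 8 volumes per 30-second block of the paradigm. The snippet below is illustrative arithmetic only.

```python
# Consistency check on the functional acquisition parameters reported above.
TR_S = 3.750            # repetition time, seconds
N_VOLUMES = 64          # whole-brain measurements per T2*-weighted run
BLOCK_LEN_S = 30.0      # paradigm block length, seconds

run_duration_s = N_VOLUMES * TR_S           # 240.0 s = 4 min of scanning
volumes_per_block = BLOCK_LEN_S / TR_S      # 8 volumes per block
print(run_duration_s, volumes_per_block)
```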

The SPM5 statistical software package (Wellcome Trust Centre for Neuroimaging, London, UK) was used for data processing. To correct for subject movement, all functional volumes were realigned to the first one. The middle functional volume was then linearly co-registered with the corresponding anatomical image, followed by spatial normalization of the functional (3 x 3 x 3 mm) and anatomical (1 x 1 x 1 mm) data to the standard Montreal Neurological Institute (MNI) space. Prior to statistical analysis, the normalized functional data were smoothed with a Gaussian kernel of 10 x 10 x 10 mm to increase the signal-to-noise ratio (by attenuating high-frequency noise) and to compensate for intersubject variability in gyral anatomy. Statistical parametric mapping was performed on a voxel-by-voxel basis using the general linear model [7].
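
The study itself used SPM5 for this pipeline; purely as an open-source stand-in, the hedged sketch below shows an analogous first-level step with nilearn, assuming a motion-corrected, MNI-normalized 4D file (the hypothetical "func_mni.nii.gz") and the events table from the sketch above. The 10 mm smoothing kernel and the canonical SPM HRF mirror the parameters reported here; everything else is an assumption for illustration.

```python
# Illustrative nilearn analogue of the first-level GLM (the study used SPM5).
# Assumes "func_mni.nii.gz" is already realigned and normalized to MNI space
# and that `events` is the block-design table from the previous sketch.
from nilearn.glm.first_level import FirstLevelModel

flm = FirstLevelModel(
    t_r=3.75,              # repetition time of the T2*-weighted series
    smoothing_fwhm=10.0,   # Gaussian kernel comparable to the 10 mm used here
    hrf_model="spm",       # canonical SPM-style HRF
    drift_model="cosine",  # slow scanner-drift regressors
)
flm = flm.fit("func_mni.nii.gz", events=events)

# Per-subject map of the activation blocks versus the implicit rest baseline
con_map = flm.compute_contrast("active", output_type="effect_size")
con_map.to_filename("sub01_active_con.nii.gz")   # hypothetical output name
```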

During the first-level statistical analysis, the rigid-body realignment parameters were entered as regressors in order to minimize artifacts resulting from the movements of each subject. To detect significant activation areas, a random effects group analysis was applied with a statistical significance threshold of p<0.001 (uncorrected).
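
In nilearn terms, the realignment-parameter regressors mentioned above would be supplied to the first-level fit as confounds, while the random-effects group step corresponds to a one-sample t-test over the per-subject contrast maps thresholded voxel-wise at p < 0.001 without correction. The sketch below illustrates that group step under the same caveats as before (open-source analogue of the SPM procedure, hypothetical file names, 21 subjects as in this study).

```python
# Illustrative random-effects group analysis: one-sample t-test over the
# per-subject contrast maps, thresholded at p < 0.001, uncorrected.
# File names are hypothetical; the study itself used SPM5.
import numpy as np
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel
from nilearn.glm import threshold_stats_img

contrast_maps = [f"sub{i:02d}_active_con.nii.gz" for i in range(1, 22)]
design = pd.DataFrame({"intercept": np.ones(len(contrast_maps))})

slm = SecondLevelModel().fit(contrast_maps, design_matrix=design)
group_z = slm.compute_contrast("intercept", output_type="z_score")

# Voxel-level threshold at p < 0.001 with no multiple-comparison correction
thr_map, z_cutoff = threshold_stats_img(group_z, alpha=0.001, height_control="fpr")
print("z threshold at p < 0.001 (uncorrected):", z_cutoff)
```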

Results

Only data with statistically significant activation (p<0.001; Table 1) are presented in the results.

Table 1. Activation areas for each paradigm (p<0.001). Coordinates (x, y, z) are given in MNI space.

| Localization of activation zones | Brodmann area | x | y | z |
|---|---|---|---|---|
| First paradigm (determination of intonational characteristics) | | | | |
| Right Cerebrum, Superior Temporal Gyrus, Wernicke's area | 22 | 62 | -18 | 0 |
| Right Cerebrum, Parietal Lobe, Sub-Gyral, Gray Matter | -//- | 28 | -58 | 38 |
| -//- | -//- | 46 | -36 | 54 |
| Left Cerebellum, Anterior Lobe, Culmen, Cerebellum_6 | -//- | -30 | -58 | -28 |
| -//- | -//- | -20 | -56 | -24 |
| Second paradigm (determination of the given accented and unaccented rhythms) | | | | |
| Left Cerebrum, Occipital Lobe | 18 | -28 | -98 | -2 |
| Right Cerebrum, Frontal Lobe, Middle Frontal Gyrus | 9 | 42 | 14 | 34 |
| Right Cerebrum, Frontal Lobe, Middle Frontal Gyrus | 6 | 42 | 4 | 58 |
| Right Cerebrum, Frontal Lobe, Inferior Frontal Gyrus | -//- | 44 | 26 | -2 |
| Left Cerebellum, Posterior Lobe, Cerebellum_6 | -//- | -28 | -62 | -28 |
| Right Cerebrum, Frontal Lobe, Medial Frontal Gyrus, Supplementary Motor Area | -//- | 4 | 16 | 48 |
| Right Cerebrum, Frontal Lobe, Superior Frontal Gyrus, Supplementary Motor Area | -//- | 6 | 24 | 58 |
| Left Cerebrum, Frontal Lobe, Medial Frontal Gyrus, Supplementary Motor Area | -//- | -4 | 10 | 50 |
| Left Cerebrum, Sub-Lobar, Thalamus, Ventral Posterior Lateral Nucleus | -//- | -16 | -18 | 6 |
| Left Cerebrum, Sub-Lobar, Extra-Nuclear | -//- | -18 | -2 | 14 |
| Third paradigm (melody recognition) | | | | |
| Left Cerebrum, Occipital Lobe | 18 | 38 | -94 | -6 |
| Cerebellum (LB), Uvula, Vermis_8 | -//- | 0 | -66 | -36 |
| Right Cerebrum, Occipital Lobe, Fusiform Gyrus | -//- | 42 | -70 | -20 |
| Left Cerebrum, Frontal Lobe, Middle Frontal Gyrus | 46 | -48 | 20 | 28 |
| Left Cerebrum, Sub-Lobar, Insula | -//- | -34 | 20 | 2 |
| Left Cerebrum, Parietal Lobe, Superior Parietal Lobule | 7 | -22 | -66 | 44 |
| Right Cerebrum, Frontal Lobe, Medial Frontal Gyrus, Supplementary Motor Area | -//- | 8 | 20 | 48 |
| Left Cerebrum, Frontal Lobe, Medial Frontal Gyrus, Supplementary Motor Area | -//- | -6 | 10 | 52 |
| Right Cerebrum, Frontal Lobe, Superior Frontal Gyrus, Supplementary Motor Area | -//- | 6 | 12 | 60 |
| Right Cerebrum, Midbrain | -//- | 8 | -22 | -16 |
| Right Cerebrum, Pons | -//- | 6 | -34 | -28 |
| Right Cerebrum, Parietal Lobe, Sub-Gyral | -//- | 26 | -58 | 38 |
| Right Cerebrum, Parietal Lobe, Superior Parietal Lobule | -//- | 28 | -60 | 56 |
| Right Cerebrum, Parietal Lobe, Inferior Parietal Lobule | 40 | 40 | -48 | 40 |

When applying the first paradigm, aimed at identifying the intonational characteristics of speech, bilateral activation of Wernicke's speech area (Brodmann area 22) was found, with significantly greater volume and intensity in the right cerebral hemisphere (Figures 1 and 2).

Figure 1. Activation of Wernicke's speech area (Brodmann area 22) in the right hemisphere in the first paradigm, aimed at identifying the intonational characteristics of speech

Figure 2. The same activation shown on a 3D image of the lateral surface of the brain

When applying the second and third paradigms (rhythm determination and melody recognition), bilateral activation of the visual cortex was observed. It was more pronounced on the right in the lateral occipito-temporal region and, on the left (which is normal for right-handed subjects), in Brodmann area 18, the secondary cortical area of the visual analyzer involved in the perception of written speech (adjacent to the primary cortical area 17). It is noteworthy that the region demonstrating the most intense activation during the tasks of the second paradigm (where the rhythm was presented as symbols), as well as during the third paradigm (melody recognition), was the occipital cortex of the left hemisphere (Figures 3A and 3B).

Figure 3. Predominantly left-sided activation of Brodmann area 18 when performing: A, the second paradigm; B, the third paradigm

In the first paradigm, both constituents, visual-spatial and symbolic, were employed equally. In addition, activation of the supplementary motor area within the cytoarchitectonically defined Brodmann area 6 was observed. It should be noted that in the course of evolution this supplementary motor area first appears in monkeys as part of the frontal lobes of the brain (Figures 4A and 4B).

Figure 4. Activation when performing the second paradigm: A, the supplementary motor area; B, Brodmann area 6 in the right hemisphere

During the determination of the given accented and unaccented rhythms, significant activation of Brodmann area 9 was observed in the right hemisphere (Figure 5).

Figure 5. Right-sided activation of Brodmann area 9, a tertiary region of the dorsolateral prefrontal cortex, when performing the third paradigm

When implementing the third paradigm (melody recognition), Brodmann area 7 was found to be activated, primarily in the left hemisphere; this area is most likely the projection of the receptors of the upper limb involved in identifying objects by touch. This is easily explained by the fact that the subjects held the response keyboard and selected answers without looking at it.

Another center of the first signal system found to be activated in this paradigm was the motor analyzer of coordinated head-eye movements (Brodmann area 46), which receives impulses from the retinotopic area 17 of the visual cortex (Figure 6). However, the reason for the activation of this system during a task primarily directed at melody recognition remains open to discussion. This observation may be related to pre-speech forms of communication.

Figure 6. Activation of Brodmann area 46 during melody recognition

When people without musical training perceive music, their perception of the melody signals is primarily emotional. In contrast, musicians perceive a melody as “speech” consisting of meaningful elements with their own sense. The emotional component of music perception is provided by the brain structures combined into the Papez circuit, which is a part of the limbic system. Here we should also mention the frontal lobes of the cerebral hemispheres and their connections with non-specific brainstem structures that exert both activating and deactivating effects on the cortex, including the speech-related areas (Figure 7A). Consideration of this fact, along with the feedback from the executive speech organs, once substantially extended and deepened the understanding of the organization of speech and the definition of its afferent and efferent forms [8].

Figure 7. Activation of brainstem structures

Figure 8. Activation of the thalamus

Significant activation of the insular lobe of the left cerebral hemisphere was observed during the third paradigm. In this case it may be considered not only a region controlling emotions and behavioral reactions but also a part of the neuronal system that connects the supramarginal gyrus and Broca's area and thus participates, together with the premotor cortex, in phonetic speech planning.

It is noteworthy that the cerebellum was activated during all three paradigms, which warrants further study.

Discussion

The understanding of the function of any area of the human brain should begin with the analysis of human phylo- and ontogenesis [9,10]. In our case, it should be noted that the formation and development of speech had a significant impact on the transformation of the cortical organization of the human brain, best reflected in the enlargement of the peripheral (secondary) fields and the formation of the tertiary nuclear areas. Owing to the isolation of specific speech-related regions within the peripheral nuclear areas, the neuronal organization of the central areas undergoes significant changes [8].

The frontal cortical regions demonstrated the most intense activation during rhythm identification and melody recognition. The frontal lobe of the brain includes the primary area, the projection motor cortex (Brodmann area 4); the secondary area, the premotor cortex (Brodmann areas 6 and 8); and the tertiary area, the prefrontal cortex (Brodmann areas 9, 10, 11, 44, 45, 46 and 47).

Differentiation of the frontal regions into separate areas is observed in the lower primates, becomes significantly more developed in higher primates and finally reaches its peak in humans, in whom a number of new areas appear. This may be because, in the course of evolution, communication between people initially took the form of body movements (complex kinetic speech), rhythmic gestures (manual kinetic speech) and dances accompanied by inarticulate intonational sounds. Subsequently, this primary communication system created the basis for the formation of articulate sound speech. Indirect confirmation of these processes may be found in the ontogenesis of a child's speech [11,12]. Psycholinguistic data show that in the course of ontogenesis a child first develops a “meaningful gesture and emotional phonation” and a “meaningful sound” thereafter [13].

When studying the activation resulting from rhythm determination, we observed significant activation primarily in the supplementary motor area of the right hemisphere. This allows us to conclude that this area is directly involved in the organization of speech rhythm. Activation of Brodmann area 9 was also registered. The functions of this area include participation in short-term memory, determination of automatic responses, speech fluency, auditory verbal attention, recognition of spatial images, counting of a series of auditory stimuli and, in the right hemisphere specifically, the formation of spatial and working memory. An unexpected finding was the involvement of this area in the semantic and perceptive processing of odors [14]. Recall that this tertiary area of the dorsolateral prefrontal cortex was formed during the late stages of phylogenesis and is characterized by the finest structure and connections.

All three paradigms were designed to study speech prosody, i.e. the direct involvement of extralinguistic speech components: speech intonation, rate, pitch and tone. Our results strongly suggest a connection between prosody (in all tests) and the limbic system at different levels, which is directly related to the emotional constituent of speech. A separate role is played by the deep brain structures, the thalamus in particular, which showed the most significant activation during two of the three paradigms (rhythm and intonation determination) (Figure 8).

Conclusion

Thus, the presented data confirm the distribution of speech-related areas across both cerebral hemispheres, which allows us to speak of co-dominance of the hemispheres with respect to this function. At the same time, predominant activation of speech-related areas in the right hemisphere was revealed. In addition to the conventional speech-related areas, we observed activation of the cerebellum, the limbic system, the subcortical structures and other regions, which allows speech to be considered a coherent, whole-brain process.

We suggest that the obtained data may be used for mapping prosodic speech characteristics in humans, including in the context of career selection. This is a relevant issue in modern neuropsychological studies.

Disclosure

This work was not funded by any foundation or institution. The authors have no conflicts of interest.

References

  1. Engel's F, Marks K, Sochineniya (1961) 2-e izd. T. 20. – M.: Gosudarstvennoe izdatel'stvo politicheskoi literatury: 490. (In Russ)
  2. Vingerhoets G, Berckmoes C, Stroobant N (2003) Cerebral hemodynamics during discrimination of prosodic and semantic emotion in speech studied by transcranial Doppler ultrasonography. Neuropsychology 17: 93-99. [Crossref]
  3. Kaufman DA, Trachenko OP (1985) Opoznanie verbal'nyh stimulov i funkcional'naya asimmetriya mozga. Fiziologiya cheloveka 11: 395-402.
  4. Pavlov IP (1949) Polnoe sobranie trudov t I-VI M: AN SSSR.
  5. SHklovskij ML (1939) Klassifikaciya i differencial'naya diagnostika rasstrojstv sluha i rechi v detskom vozraste. CH 1. Klassifikaciya i diferencial'naya diagnostika gluhonemoty, gluhoty i tugouhosti. Leningr. nauchno-prakticheskij institut sluha i rechi-[Petrozavodsk]: Kargosizdat, [Leningrad]-140.
  6. Filippi M (2009) fMRI techniques and protocols. New York, Humana press, USA 25.
  7. Friston KJ, Holmes AP, Worsley KJ, Poline JP, Frith CD, et al. (1995) Statistical parametric maps in functional imaging: A general linear approach. Hum Brain Mapp 2:189-210.
  8. Luriya AR (2008) Vysshie korkovye funkcii cheloveka. SPb:Piter: 64-71.
  9. Sepp EK (1949) Istoriia razvitiia nervnoi sistemy pozvonochnykh: 424.
  10. Karlov VA (2002) Detskaia nevrologiia kak instrument poznaniia razvivaiushchegosia mozga ZHurnal nevrologii i psikhiatrii im S S Korsakova 4: 5-6.
  11. Badalyan LO (2001) Detskaya nevrologiya.
  12. Luriya AR, YUdovich F, YA (1956) Rech' i razvitie psihicheskih processov u rebenka.
  13. Isenina EI (1983) Psiholingvisticheskie zakonomernosti rechevogo ontogeneza (doslovesnyĭ period). Ivanovo, Russia.
  14. Royet JP, Koenig O, Gregoire MC, Cinotti L, Lavenne F, et al. (1999) Functional anatomy of perceptual and semantic processing for odors. J Cogn Neurosci 11: 94-109. [Crossref]

Article Type

Research Article

Publication history

Received date: September 28, 2017
Accepted date: October 24, 2017
Published date: October 27, 2017

Copyright

© 2017 Karlov VA. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation

Karlov VA, Shklovskiy VM, Konovalov RN, Zolovkina VS (2017) Structural and functional peculiarities of speech system organization in right-handed subjects revealed using functional magnetic resonance imaging. The co-dominant right hemisphere. Brain Nerves 1: DOI: 10.15761/JBN.1000117

Corresponding author

Vladimir Karlov

Evdokimov Moscow State University of Medicine and Dentistry, Moscow, Russia; Tel: +7-916-149-29-42

