Basic principles of visual functions: mathematical formalism of geometries of shape and space, and the architecture of visual systems

Shigeko Takahashi

Psychology Laboratory, Kyoto City University of Arts, Ohe-Kutsukake-cho, 13-6, Nishikyo-ku, Kyoto 601-1197, Japan

E-mail : sgtak@kcua.ac.jp

Yoshimichi Ejima

Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Japan

DOI: 10.15761/JSIN.1000119

Abstract

There is growing evidence for homologous mechanisms of recognition/perception and navigation in many species, from insects to humans. This leads to the notion that the core systems of recognition and navigation are shared across species, and that the visual environment experienced during motion and/or navigation molds the spatiotemporal properties of nervous systems across widely separated phyla according to basic common principles. In this study, we propose a mathematical formalism for two distinct geometries, of shape and of space, in the visual images on the retina. The formalism clarifies how the architecture of the visual system is suited to processing the two geometries and to producing a form of circulating memory in space-time, i.e., recognition of allocentric space.

Key words

Visual functions, shape perception, spatial memory, geometry

Introduction

Throughout evolutionary history, visual sensory systems have relied on electromagnetic energy (i.e., light) from the sun or other celestial sources, and a crucial feature of eyes subserving visual functions is their imaging capacity: eyes that form images enable the visual neural system to extract spatial information by analyzing the patterns of energy generated by or reflected from objects in the environment. Conversely, “eyes” lacking any image-forming apparatus (optical components for imaging) have subserved non-visual functions, such as circadian entrainment and/or shadow detection [1].

A universal feature of visual functioning at an early stage is the topological, spatial mapping of a peripheral receptor surface, on which images are formed, onto the corresponding central neural processors. In primates, including humans, it has been established that the retinotopic mapping of the visual field onto the surface of the striate cortex (V1) is characterized as a (logarithmic) conformal mapping [2-4]. Furthermore, there is a general principle in visual processing that the geometries of shape and space are processed by different neural pathways, referred to as the “what” (ventral) pathway and the “where” (dorsal) pathway, respectively, in the primate brain [5-7]. To gain insight into these functional aspects, it is helpful to clarify how the anatomy and physiology of the visual system relate to the geometries present in retinal images.
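
As a concrete illustration (not taken from the cited studies), the following sketch evaluates the complex-logarithm model of the retinotopic map, w = log(z + a), in the spirit of Schwartz [2,3]; the foveal constant a = 0.5 and the sample eccentricities are illustrative values of our own.

```python
# A minimal sketch of the complex-logarithm model of the retinotopic
# map from visual field to striate cortex: w = log(z + a). The foveal
# constant `a` (here 0.5 deg) is an illustrative value.
import numpy as np

def retina_to_cortex(ecc_deg, angle_rad, a=0.5):
    """Map a retinal point (eccentricity, polar angle) to model
    cortical coordinates via the conformal map w = log(z + a)."""
    z = ecc_deg * np.exp(1j * angle_rad)   # retinal position, complex plane
    w = np.log(z + a)                      # conformal (log) mapping
    return w.real, w.imag                  # model cortical coordinates

# Far from the fovea, a rotation about the fovea becomes nearly a pure
# vertical shift in cortical coordinates, and a scaling becomes a
# horizontal shift -- the hallmark of log-polar conformality.
for ecc in (2.0, 8.0):
    x0, y0 = retina_to_cortex(ecc, 0.2)
    x1, y1 = retina_to_cortex(ecc, 0.2 + 0.3)   # rotate image by 0.3 rad
    print(f"ecc={ecc}: dx={x1 - x0:+.4f}, dy={y1 - y0:+.4f}")
```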

Recently, Spelke and Lee [8] proposed a hypothesis of two core systems of geometry that humans share with other animals: a core navigation system and a core form analysis system. The core navigation system processes information about large-scale layouts, guiding navigation. The core form analysis system processes information about small-scale objects and forms, guiding form/shape analysis. The authors provided empirical evidence for their hypothesis by showing that animals from insects to humans recognize objects primarily on the basis of their shapes, regardless of task demands [9,10], and that navigation in animals across species depends on distinct representations of the large-scale layout and of small-scale landmarks that interact to influence behavior [11,12]. They emphasized the importance of the behavior of animals and young children for insight into these core cognitive capacities: adult human intuition is a poor source of insight into such core systems, because their internal functioning depends on principles and processes that are distinctly non-intuitive.

Traditionally, perception was considered to be a detached distal connection between the perceiver and the perceived, and the concept of intentionality implied a teleological link between an actual situation and an intended future condition. Thus, in the visual neurosciences, experiments have been designed to probe the physical events that evoke conscious perceptual experiences, by consulting adult human intuitions. These approaches left scientists with a puzzle: resolving the non-intuitive principles and/or processes underlying our perception. Any scientific discipline can be evaluated along two dimensions, its degree of mathematical expression and its amount of empirical support. A primary obstacle to the mathematization of the visual neurosciences is therefore the selection of a suitable conceptual basis for the mathematical formalism, one that makes the formalism necessary (or feasible) rather than merely convenient (or optimal).

This article will explore the question of what mathematical formalism is best motivated by the fundamental issues of the architecture of the visual system shared universally across species. In particular, our overview focuses on the historical sources and development of theoretical attempts to address the geometries of shape and space, by shedding light on the computational problems that the visual system evolved to solve.

Section 1: Mathematical formalism

First, we consider the difference between shape and space in geometry. The most important idea relevant to our discussion is the antithesis between ordinary (elementary) and projective geometry in Felix Klein’s Erlangen Program [13]. According to Klein’s idea, transformations are divided into two groups. One group is designated the principal group of transformations, under which the geometric properties of a configuration (figure/shape) in space remain entirely unchanged: they are independent of the position the configuration occupies in space and of its absolute magnitude, and they are preserved under any motions of space, under transformations into similar configurations (scaling), under transformations into symmetrical configurations with regard to a plane (reflection), and under any combination of these transformations:

“For, if we regard space as immovable, etc., as a rigid manifoldness, then every figure has an individual character; of all the properties possessed by it as an individual, only the properly geometric ones are preserved in the transformations of the principal group” [13].

There are transformations that do not belong to the principal group, referred to as projections. Projective geometry arose to characterize the properties transferred in the process of projection, in such a way as to make evident their independence of the changes introduced by the projection. The group of all projective transformations is distinguished from the principal group of transformations by its significance, but each group is of equal importance.

This argument captures the different methods of treating the geometries of 2-dimensional images provided by objects (shapes) and by environmental space, as they apply to the visual system. For the shapes of objects, properly geometric characteristics are preserved under the transformations of the principal group: two shapes are defined as the same only when one shape is obtained from the other by translation, scaling, etc. Therefore, the representation of shapes must be based on the invariants relating to the principal group of transformations. In contrast, for projected images of environmental space, properly geometric characteristics are preserved under projective transformations. Therefore, the representation of space must be based on the invariants relating to projective transformations. The difference between these two geometries requires that the visual system adopt different processing strategies for them.
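
The distinction can be checked numerically. The following sketch, with arbitrary illustrative transformation parameters of our own, shows that a similarity transformation (a member of the principal group) preserves simple ratios of distances, whereas a general projective transformation preserves only the cross-ratio of four collinear points.

```python
# Klein's distinction in miniature: similarity transforms preserve
# ratios of distances; a general projective transform does not, but it
# preserves the cross-ratio of four collinear points. Numbers are
# illustrative.
import numpy as np

def similarity(p, s=2.0, th=0.7, t=(1.0, -2.0)):
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return s * (R @ p) + np.asarray(t)

def projective(p, H):
    q = H @ np.array([p[0], p[1], 1.0])    # homogeneous coordinates
    return q[:2] / q[2]

H = np.array([[1.0, 0.2, 0.1],
              [0.0, 1.1, 0.3],
              [0.4, 0.1, 1.0]])            # an arbitrary non-affine homography

pts = [np.array([x, 0.0]) for x in (0.0, 1.0, 2.0, 4.0)]   # collinear points
d = lambda a, b: np.linalg.norm(a - b)
cross = lambda a, b, c, e: (d(a, c) * d(b, e)) / (d(b, c) * d(a, e))

for name, f in [("similarity", similarity),
                ("projective", lambda p: projective(p, H))]:
    q = [f(p) for p in pts]
    ratio = d(q[0], q[1]) / d(q[1], q[2])  # simple distance ratio
    cr = cross(q[0], q[1], q[2], q[3])     # cross-ratio
    print(f"{name}: distance ratio={ratio:.3f}, cross-ratio={cr:.3f}")
# Originally the distance ratio is 1.000 and the cross-ratio is
# (2*3)/(1*4) = 1.500; the similarity keeps both invariants, the
# homography keeps only the cross-ratio.
```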

Geometry of shape

Hoffman [14] proposed a theory of visual shapes, i.e., an intrinsic two-dimensional geometry carried by the visual manifold, forming a representation of the field of view within the visual system. His theory consisted of a pattern representation in terms of tangent vectors, definitions of visual shapes established through invariances of the integrated vector fields under Lie transformation groups, and combining operations between Lie operators. He proposed an “annulling action” of the invariance operation as the process of “perception”: patterns are operated upon by neurons until they become invariant, in a complex way, via prolongations of Lie operators.
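
Hoffman’s “annulling” idea admits a tiny numerical illustration: the Lie generator of rotations, L = -y ∂/∂x + x ∂/∂y, annihilates precisely the rotation-invariant patterns. In the sketch below, the finite-difference implementation and the test patterns are ours, not Hoffman’s.

```python
# The rotation generator L = -y d/dx + x d/dy applied by finite
# differences: it "annuls" rotation-invariant patterns and leaves a
# non-zero residue on patterns that are not rotation-invariant.
import numpy as np

def lie_rotation(f, x, y, h=1e-5):
    """Apply the rotation generator to a scalar pattern f(x, y)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # df/dx
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)   # df/dy
    return -y * fx + x * fy

circle  = lambda x, y: x**2 + y**2        # rotation-invariant pattern
grating = lambda x, y: np.cos(3 * x)      # not rotation-invariant

x, y = 1.2, -0.7
print("L[circle ] =", lie_rotation(circle, x, y))   # ~0 (annulled)
print("L[grating] =", lie_rotation(grating, x, y))  # clearly non-zero
```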

More recently, Sharon and Mumford [15] proposed a method for the representation of 2D shapes based on constructions from the theory of conformal mapping. In the metric space arising from conformal mappings of 2D shapes into each other, every simple closed curve (a “shape”) in the plane is represented by a “fingerprint”, which is a diffeomorphism of the unit circle to itself (a differentiable and invertible periodic function). Every shape defines a unique equivalence class of such diffeomorphisms up to right multiplication by a Möbius map: two shapes define the same diffeomorphism only when one shape is obtained from the other by translation and scaling. Thus, the fingerprint encodes the invariants relating to the principal group of transformations in Klein’s terms. The fingerprint encodes information about the domain in its derivative, which is shown to be influenced by two factors: the boundary curvature near a point of interest, and the distance between a base point in the interior of the shape and the boundary point of interest. These studies indicate that the mathematical operations of the principal group of transformations are crucial for the extraction of information about a shape, and that the boundary is a determining feature of the shape.
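
Computing the actual fingerprint requires conformal welding, which is beyond a short sketch, but its key property, a boundary-based descriptor unchanged by the principal group, can be imitated with a much simpler quantity: the normalized turning angles along a polygonal boundary. The descriptor and example below are a simplified stand-in of our own, not the Sharon-Mumford construction.

```python
# A boundary descriptor invariant under translation, rotation and
# scaling: the exterior (turning) angle at each vertex of a closed
# polygon, normalized by the total turning of 2*pi.
import numpy as np

def boundary_turning_signature(pts):
    pts = np.asarray(pts, dtype=float)
    v = np.roll(pts, -1, axis=0) - pts                      # edge vectors
    ang = np.arctan2(v[:, 1], v[:, 0])                      # edge directions
    turn = np.angle(np.exp(1j * (np.roll(ang, -1) - ang)))  # exterior angles
    return turn / (2 * np.pi)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# The same square, rotated by 30 deg, scaled by 3 and translated.
th, s, t = np.pi / 6, 3.0, np.array([5.0, -2.0])
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
square2 = [s * (R @ np.array(p)) + t for p in square]

print(boundary_turning_signature(square))    # [0.25 0.25 0.25 0.25]
print(boundary_turning_signature(square2))   # identical signature
```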

Geometry of space

In Klein’s terms, the geometry of projection, which is invariant under projective transformations, is distinguished from the geometry of shape. In addition, our sense of space/place is brought about by analyzing the surrounding layouts that are mapped serially onto the retina through navigation. In everyday life, we respect the intuition that time and space have independent existences. However, time is an ordering device used to make sense of the perceived world, and the objects of our perception are always places (space) and times in connection: no one has observed a place except at a particular time, or a time except at a particular place. In this view, Minkowski [16] focused attention on how mathematics structures our understanding of the physical world and arrived at concepts about time and space by purely mathematical consideration. Referring to a space-time diagram, i.e., the world-line structure of space-time, Minkowski defined a “space-time line” as the totality of space-time points corresponding to any particular point of matter for all time t. Minkowski also accomplished a most important work in number theory, the geometry of numbers (“Geometrie der Zahlen”) [17], in which he introduced the notion of numerical grids or lattices (Zahlengitter) as a geometrical representation of arithmetic relations. Gauthier [18] pointed out the inner mathematical connection of Minkowski’s space-time formulation with his geometry of numbers: the space-time diagram is an illustration in physical geometry of a central scheme in the geometry of numbers. On the assumption that motion can only be represented by the picture of a moving vector on a continuous line, space-time diagrams can be drawn to picture motion in a physical geometry just as grids are used to cover the content of a surface in a geometry of numbers.

In view of Minkowski’s pronouncements on the space-time diagram, one is tempted to question the traditional concept of the representation of space in the neurosciences. If the material content of the physical world is constrained by that structure, time and space would not be experienced or perceived independently by us. One can argue that a sequence of retinal images and motions may contribute conjointly to our perception of space.

Another important contention is the parallel between covering a surface with numerical grids and filling up a two-dimensional space with diagrams. In mathematics, it has been proved that conformal mappings can be approximated by circle packing isomorphisms [19]: roughly speaking, a bounded region is almost filled by ε-circles from the regular hexagonal ε-circle packing of the plane [20], and this triangulation is useful for grid generation [21].
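
For concreteness, the following sketch generates the regular hexagonal ε-circle packing restricted to the unit disk; ε is an arbitrary illustrative parameter, and the code is merely a numerical picture of the packing, not of the convergence proof.

```python
# Covering a bounded region (the unit disk) with the regular hexagonal
# epsilon-circle packing: centers sit on a triangular lattice of
# spacing 2*eps; we keep circles that lie fully inside the disk.
import numpy as np

def hexagonal_packing_centers(eps=0.05, radius=1.0):
    centers = []
    dy = eps * np.sqrt(3.0)                  # row spacing of the lattice
    n = int(radius / eps) + 2
    for row in range(-n, n + 1):
        y = row * dy
        x_offset = eps if row % 2 else 0.0   # stagger alternate rows
        for col in range(-n, n + 1):
            x = col * 2 * eps + x_offset
            if np.hypot(x, y) + eps <= radius:   # circle fully inside
                centers.append((x, y))
    return np.array(centers)

eps = 0.05
c = hexagonal_packing_centers(eps)
covered = len(c) * eps**2                    # packed area / disk area
print(f"{len(c)} circles, fraction of disk covered ~ {covered:.3f}")
# As eps -> 0 this fraction approaches pi/(2*sqrt(3)) ~ 0.9069, the
# density of the hexagonal circle packing.
```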

This line of argument reminds us of the existence of grid cells in the entorhinal cortex of the vertebrate medial temporal lobe (MTL), whose spatial firing fields can be thought of as representing the nodes of a triangular grid: the spatial autocorrelation of the firing-rate map has a well-defined hexagonal structure surrounding a central peak [22-24]. Moreover, the concept of the space-time diagram may clarify the deep computational connection between spatial and temporal coding in the hippocampus suggested by Howard and Eichenbaum [25] and revealed by the discovery of “time cells” by MacDonald et al. [26].
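
The hexagonal autocorrelation structure is easy to reproduce with the standard three-cosine model of a grid-cell firing map (three plane waves whose directions differ by 60°); the grid spacing and map size below are arbitrary illustrative values, not fitted to recordings.

```python
# Why grid-cell maps yield hexagonal autocorrelations: summing three
# cosine gratings 60 degrees apart produces a triangular lattice of
# firing fields; the spatial autocorrelation then peaks on the same
# lattice, i.e., on a hexagon around the central peak.
import numpy as np

size, spacing = 120, 20.0                  # map size (px), grid period (px)
yy, xx = np.mgrid[0:size, 0:size].astype(float)
k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number giving that period
rate = np.zeros((size, size))
for th in (0.0, np.pi / 3, 2 * np.pi / 3): # three plane waves, 60 deg apart
    rate += np.cos(k * (xx * np.cos(th) + yy * np.sin(th)))
rate = np.maximum(rate, 0)                 # rectified firing-rate map

# Spatial autocorrelation via FFT (circular wrap, fine for a sketch).
f = np.fft.fft2(rate - rate.mean())
ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)
ac /= ac.max()

center = size // 2
peak = ac[center + int(spacing), center]   # one grid period "up": one of
print(f"secondary peak at one grid spacing: {peak:.2f} (center = 1.00)")
# the six hexagonal neighbors; the map is periodic there, so ~1.00.
```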

The mathematical formalism outlined above has an essential meaning as the representational means for the structure of the physical world whose images are analyzed by the visual system. The formalism therefore provides a basic framework for the neural implementation of visual functions shared across species. In the following sections, we show how well visual functions can be understood in terms of this basic framework, by examining invertebrate (honeybee) and vertebrate (human) visual functions.

Section 2: Honeybees

Among invertebrates, honeybees are well known to exhibit complex social, navigational and communication behavior, as well as a rich cognitive repertoire including object/shape recognition and spatial memory. Since these complex behaviors are controlled by a small brain consisting of only a million or so neurons, honeybees offer an opportunity to study the relationship between such behaviors and underlying mechanisms that are limited in size and complexity.

The nervous system and the neural organization of the honeybee’s visual system, and the differences between insect eyes and vertebrate or human eyes, are well documented in the literature [27-29]. We thus focus on the strategies honeybees have evolved for dealing with the problems of object/shape recognition, visually guided navigation, and spatial memory. The honeybee’s immobile, fixed-focus eyes cannot make eye movements, so its strategies must rely on image motion generated by the insect’s own body motion to achieve its rich cognitive behaviors. This makes it easier to appreciate the tight linkage, in the visual processing of object/shape and space, between the geometries of retinal images and self-motion/eye movements, a linkage that humans cannot (at least consciously) sense.

Shape recognition

In comparative studies of cognitive behavior, the occurrence of associative learning and generalization is used to demonstrate the existence of cognitive capacities in insects comparable to those in humans [30]. Lehrer and Campan [31] showed that honeybees discriminated between pairs of novel shapes and generalized the shapes across different types of contrast: changes in the color or patterning of the shape area did not affect discrimination performance, suggesting that the appearance of the shape area was not crucial; in addition, discrimination did not deteriorate when the shapes were represented only by their outlines or by portions of outlines, indicating that the honeybees recognized the outlines, rather than the area, of the shapes. Based on these findings, they concluded that the cue used in the discrimination of shapes was located at the boundary of the shapes, and that the generalization across different types of contrast/color could be explained by neither feature-extraction theory nor image-matching theory.

Several studies have shown that honeybees use self-induced image motion in a variety of visual tasks, particularly when both the shapes and their backgrounds are patterned [32,33]. In honeybees, contour following is under the control of the green-sensitive receptors that project to the movement-detection system [34]. Thus, image motion induced by self-motion may serve the task of shape recognition on the basis of the geometry of shape, i.e., invariants relating to the principal group of transformations.

Navigation

Srinivasan [28] demonstrated that flying honeybees display surprisingly competent mechanisms of navigation, by describing three illustrative examples in the context of navigation to a destination.

The first example is the honeybee’s flight when negotiating narrow gaps and avoiding obstacles. Kirchner and Srinivasan [35] used a tunnel whose two walls each carried a vertical black-and-white grating, and demonstrated that during flight the honeybees balanced the speeds of the retinal images on their two eyes independently of contrast frequency: a lower image speed on one eye caused the honeybee to move closer to the wall seen by that eye; a higher image speed had the opposite effect. This behavioral pattern was not influenced by the luminance profiles of the gratings (square- or sinusoidal-wave) and was independent of the contrasts of the gratings on the two sides. They concluded that the honeybee’s visual system during flight is capable of measuring the image velocities in the two eyes robustly and independently, and of using this information to steer a collision-free path through the gap.
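
A toy closed-loop version of this centering response can be written in a few lines: the agent compares the image angular velocity on its two sides (forward speed divided by lateral distance to each wall) and drifts away from the side with the faster image motion. Gains, geometry and time step below are invented for illustration.

```python
# Centering by balancing the two eyes' image speeds, in the spirit of
# the tunnel experiment: the agent converges to the tunnel midline
# regardless of which wall it starts near.
def centering_flight(y0=0.2, width=1.0, v=1.0, gain=0.5, steps=40, dt=0.1):
    y = y0                                   # lateral position in tunnel
    for _ in range(steps):
        w_left = v / y                       # image speed from left wall
        w_right = v / (width - y)            # image speed from right wall
        y += dt * gain * (w_left - w_right)  # drift away from faster side
        y = min(max(y, 1e-3), width - 1e-3)  # stay inside the tunnel
    return y

print(f"final lateral position: {centering_flight():.3f} (midline = 0.500)")
```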

The second example is insects’ ability to control their flight speed. This is the finding of David [36], who observed fruit flies flying upstream along the axis of a wind tunnel whose walls were decorated with a helical black-and-white striped pattern, so that rotation of the cylindrical tunnel about its axis produced apparent movement of the pattern towards the front or the back. The results revealed that the fruit flies regulated their flight speed so as to hold constant the angular velocity of the image on the eye, irrespective of the spatial structure of the image. This strategy has the great advantage that the insect automatically slows down to a safer speed when negotiating a narrow passage.
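
The strategy reduces to commanding forward speed v = ω_set · d, where ω_set is the held image angular velocity and d the lateral distance to the passage wall; the corridor profile and set-point below are made-up illustrative numbers.

```python
# Holding image angular velocity omega = v / d constant makes forward
# speed v track the passage half-width d, so the agent slows down
# automatically where the passage narrows.
def speed_profile(half_widths, omega_set=2.0):
    """Commanded forward speed at each point along the corridor."""
    return [omega_set * d for d in half_widths]

corridor = [0.50, 0.50, 0.35, 0.20, 0.35, 0.50]   # half-width along path (m)
for d, v in zip(corridor, speed_profile(corridor)):
    print(f"half-width {d:.2f} m -> speed {v:.2f} m/s")
```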

Srinivasan [28] tested the feasibility of these two strategies by implementing them in robots. The robots reliably followed the axis of a corridor, irrespective of whether the corridor was straight or curved, and automatically slowed down when the corridor narrowed.

The third example is the honeybee’s performance of smooth landings. In studies of human landing behavior in aircraft, as well as of walking and running, optic-flow cues have been highlighted since the seminal work of Gibson [37]. However, Srinivasan [28] noted that when an insect makes a grazing landing on a flat surface, the optic-flow cues derived from image expansion are relatively weak, because the dominant pattern of image motion is a translatory flow in the front-to-back direction. Srinivasan et al. [38] video-filmed and analyzed landing trajectories and revealed that horizontal speed was roughly proportional to height. They proposed that landing is guided by holding the image velocity constant, so that horizontal speed is regulated to be proportional to height above the ground; when the honeybee finally touches down, its horizontal speed is zero, ensuring a smooth landing.
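
Under this control law the descent is a simple exponential: commanding v = ω_set · h and a sink rate proportional to v makes horizontal speed decay with height. A toy simulation with invented parameters (not values from the cited study) shows touchdown with near-zero horizontal speed.

```python
# A toy version of the grazing-landing strategy: keep the ground-image
# angular velocity v/h constant by commanding v = omega_set * h, and
# descend at a fixed fraction of forward speed. Horizontal speed then
# decays in proportion to height, reaching ~0 at touchdown.
def grazing_landing(h=2.0, omega_set=1.5, descent_ratio=0.25, dt=0.05):
    t = 0.0
    while h > 0.01:                      # 1 cm counts as touchdown here
        v = omega_set * h                # hold image angular velocity
        h -= descent_ratio * v * dt      # sink rate tied to forward speed
        t += dt
    return t, omega_set * h

t, v_final = grazing_landing()
print(f"touchdown after {t:.1f} s with horizontal speed {v_final:.3f} m/s")
```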

These three examples reveal how flying insects use computationally simple visual guidance strategies to negotiate narrow gaps, avoid obstacles, regulate flight speed and orchestrate smooth landings. During navigation, flying insects require neither stereoscopic measurement of surface distance nor complicated computation of optic-flow cues. These strategies are far more amenable to real-time implementation than methods that use stereoscopic vision to calculate distances or that compute optic flow by image interpolation.

Spatial memory

Insects are traditionally used as models for the study of elemental forms of associative learning. Since the expression of learned behavior in honeybees depends on context, such as stimulus characteristics, time of day, location and social condition, honeybees are considered to store the what, where and when of an experience as an integrated memory [27]. Furthermore, honeybees have the ability to communicate about remote spatial locations, known as the waggle dance [39]. Honeybees thus demonstrate cognitive processes, including the extraction of spatial relations and decision-making based on a representation of the environmental world that is conceptualized as a cognitive map or allocentric space [27].

Studies of honeybees clearly demonstrate that, despite their simple nervous systems, honeybees display surprisingly competent mechanisms of shape recognition, visual guidance for navigation, and spatial memory. The strategies employed by honeybees reflect the basic principles of the processing of the geometries of shape and space, providing insight into how visual information is exploited to perceive objects’ shapes, navigate, and recognize allocentric space in humans.

Section 3: Humans

Shape perception

As mentioned in Section 1, shapes are geometrically characterized by two propositions:

  1. For every figure (shape), properly geometric characteristics are preserved under the transformations of the principal group.
  2. After conformal mapping, every shape is represented by a “fingerprint”, which is a diffeomorphism of the unit circle to itself, influenced by the boundary curvature.

Proposition (1) means that in order to recognize an object’s shape in the environment, the visual system must exploit invariances under the transformations of the principal group. For such visual exploitation, the operations of the principal group of transformations are implemented by eye movements, as seen in honeybees. Indeed, nearly all animals with good visual functions have a repertoire of eye movements, whether made by the eyes themselves, by the head, or, in some insects, by the whole body [40]. In the 1950s, studies of humans found that stationary objects vanished perceptually in the absence of so-called fixational eye movements [41-43]. Two clinical cases are noteworthy here. In the first, the patient A.I. had never made eye movements because of extraocular fibrosis, yet her visual perception was surprisingly normal. The strategy A.I. used was to move her head in a ‘saccadic’ fashion (with both voluntary and automatic saccades), and the saccadic movements of A.I.’s head closely resembled the saccadic eye movements of normal subjects [44-46]. In the second, children with cerebral palsy show microsaccadic impairment, which compounds their learning difficulties in reading [47].

Proposition (2) means that recognizing an object’s shape may rely on a neural representation reflecting the behavior of the boundary derivatives. Elder and Velisavljević [48] found empirical behavioral evidence for this. They used a dataset of images in which luminance, color, texture and shape (boundary) cues were selectively turned on or off, and measured object-detection performance in human subjects. The results showed that humans did not use simple luminance or color cues for object detection but instead relied on shape (boundary) and texture cues: the boundary cue was the first available, influencing performance for stimulus durations as short as 10 ms within a backward-masking paradigm. At the neural level, Pasupathy and Connor [49,50] provided empirical evidence for this possibility by investigating neural responses in area V4, which is an intermediate stage in the ventral (“what”) pathway and provides the major input to final stages in the inferotemporal cortex. They found that in macaque area V4, many neurons responded to boundary curvature and angular position. They incorporated their single-neuron data into a population-coding model of boundary curvature; the resulting population responses indicated that all the salient boundary features were represented in the population response and were reproduced in the reconstruction.
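
A hedged sketch of the curvature-by-angular-position idea: model units with Gaussian tuning over boundary curvature and over the angular position of a boundary fragment (relative to the object’s center) respond selectively to a shape’s salient boundary features. The tuning widths, the response rule and the “teardrop” feature list are invented for illustration and are not the fitted model of [49,50].

```python
# Toy curvature x angular-position units: each unit responds to the
# best-matching boundary feature of the shape, with Gaussian tuning
# over curvature and (circular) angular position.
import numpy as np

def v4_unit(features, pref_curv, pref_ang, sig_c=0.15, sig_a=0.4):
    """Response of one model unit: max over the shape's boundary features."""
    best = 0.0
    for curv, ang in features:
        d_ang = np.angle(np.exp(1j * (ang - pref_ang)))   # circular distance
        best = max(best, np.exp(-((curv - pref_curv) / sig_c) ** 2
                                - (d_ang / sig_a) ** 2))
    return best

# A "teardrop": one sharp convexity (curvature 0.9) at angle 0, and
# shallow convexity (0.2) around the rest of the boundary.
teardrop = [(0.9, 0.0), (0.2, np.pi / 2), (0.2, np.pi), (0.2, 3 * np.pi / 2)]

print("tuned to the sharp tip     :", round(v4_unit(teardrop, 0.9, 0.0), 2))
print("same curvature, wrong angle:", round(v4_unit(teardrop, 0.9, np.pi), 2))
print("shallow tuning at pi       :", round(v4_unit(teardrop, 0.2, np.pi), 2))
```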

These arguments suggest the existence of two stages of processing in the human perception/recognition of shapes. The earlier stage may involve the extraction of invariants relating to the principal group of transformations through eye movements (saccades, microsaccades, etc.). The extracted invariants may be preserved in a topological pattern of neural activities encoding the boundary, along which differential operations enable the generation of a neural representation of the shape at the later stage. In this scenario, the earlier stage detects geometric features that are unique to objects, whereas the later stage abstracts geometric information about the objects, allowing it to be represented as conceptual information about the meaning of the stimulus. This explains why we are ordinarily unaware of our eye movements, such as saccades and microsaccades, except for voluntary eye movements (attentional gaze shifts), even though our eyes are always moving to achieve visual functions [51,52].

Given these representational means for shapes at the neural level, the regularity and geometry of orientation columns in primate V1 are well suited to the conjugate connection between the extraction of boundary information and eye movements. Although the existence of orientation columns in V1 was established in the 1960s, little is known about the roles of such visual processing architectures in the generation of higher-order receptive-field properties. The distribution of receptive-field orientations of cells within the tissue of the striate cortex follows two remarkable rules [53]: first, neurons with cell bodies aligned vertically within one narrow column of cortex tend to respond optimally to lines of one and the same orientation; second, neurons in different columns generally respond to different orientations, and are so arranged that continuous movement through the cortex corresponds, barring singularities, to a continuous rotation of the corresponding orientation. When the eye moves along an outline or boundary of an object, the projected image of the object translates and/or rotates on the retina, which is conformally mapped to V1. Given such images moving continuously through the cortex, the architecture of the orientation columns may offer a great advantage for preserving invariants relating to the principal group of transformations as a topological and robust pattern of population activity within the orientation columns, where pattern structures represented by directional cells are operated upon until they become invariant, in a complex way, via eye movements. Thus, humans share with honeybees the basic principle of neural processing of the geometry of shapes.
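
The claimed advantage can be pictured with a minimal sketch: as a smooth boundary is traced, the local tangent orientation changes continuously, so the active orientation column (orientation quantized into a fixed number of columns, a number we choose arbitrarily here) steps only between neighboring columns rather than jumping.

```python
# Tracing an ellipse boundary: local tangent orientation varies
# smoothly with position, so consecutive boundary points activate the
# same or an adjacent orientation column.
import numpy as np

n_columns = 18                                   # columns spanning 0..180 deg
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
dx, dy = -2.0 * np.sin(t), 1.0 * np.cos(t)       # ellipse tangent vectors
orient = np.mod(np.degrees(np.arctan2(dy, dx)), 180.0)  # line orientation
column = (orient / 180.0 * n_columns).astype(int)

jumps = np.abs(np.diff(column))
jumps = np.minimum(jumps, n_columns - jumps)     # circular column distance
print("largest step between consecutive columns:", jumps.max())  # prints 1
```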

Navigation and recognition of allocentric space

The mathematical formalism for the geometry of space indicates that a sequence of images projected on the retina through navigation may structure the representation of physical space as space-time diagrams. In order to navigate, it is desirable for the organism to locomote automatically and safely, and thus subconsciously. For this purpose, the strategies used by honeybees are ideal, and humans may well use similar strategies during locomotion such as walking and running. In honeybees, these strategies are based on the image velocities in the two eyes, measured independently. In humans, as in many other vertebrate species, the left and right hemifields of the visual field are projected onto visual areas of the right and left hemispheres, respectively. When walking forward, the optic flow induced by the self-motion of walking is projected onto V1 where, because of the conformal mapping, the translational and rotational components of the optic flow are orthogonally represented in a topological manner. Moreover, the vestibularly driven reflexes, i.e., the vestibulo-ocular reflex (VOR), vestibulo-spinal reflex (VSR) and vestibulo-collic reflex (VCR), elicit eye/head movements so that images remain stable on the fovea, as long as the subject looks in the direction of heading [54]. These architectures allow the image velocities in the two eyes to be measured independently and robustly. Using the estimated image velocities in the two eyes to steer a collision-free path and to regulate locomotion speed would be a feasible strategy for human locomotion, because no complex neural computation is required.
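
Why the conformal mapping orthogonalizes these components can be seen directly from the complex logarithm: under w = log(z), a pure expansion of the retinal image multiplies z by a real factor and shifts all cortical points horizontally by the same amount, while a pure rotation multiplies z by a unit complex number and shifts them all vertically. The sample points and motion magnitudes below are illustrative.

```python
# Log-polar separation of self-motion flow: expansion -> uniform
# horizontal shift (real part), rotation -> uniform vertical shift
# (imaginary part) in cortical coordinates.
import numpy as np

pts = np.array([0.5 + 0.2j, -1.0 + 1.5j, 2.0 - 0.7j])   # retinal points

for name, transform in [("expansion", lambda z: z * 1.05),
                        ("rotation ", lambda z: z * np.exp(1j * 0.05))]:
    shift = np.log(transform(pts)) - np.log(pts)  # cortical displacement
    print(name, "cortical shifts:", np.round(shift, 4))
# Expansion shifts every point by log(1.05) ~ 0.0488 horizontally;
# rotation shifts every point by exactly 0.05 vertically.
```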

Clifford and Ibbotson [55] examined motion processing in vertebrate and insect visual systems, and pointed out that the lobula plate (a neuropile) in the insect optic lobes functions in a fashion similar to the nuclei of the accessory optic system (AOS) and of the optic tract (NOT) in mammals: in insects, the neurons of the lobula plate transfer information from the optic lobes into the midbrain and are involved in controlling optomotor responses [56,57]; in vertebrates, NOT neurons are considered to summate the inputs from directional cells to generate selective responses to large-field motion, and the NOT and AOS are connected to the motor system [58,59]. It has been suggested that the visual environment during head and eye movements molds the spatiotemporal properties of these neurons across widely separated phyla [55]. Humans cannot, at least consciously, sense how and what visual information is exploited to guide their walking/running (indeed, one can walk automatically while looking at a smartphone display), and therefore no researcher has an intuitive feeling for what constitutes an ideal, or even adequate, visual stimulus in examinations of human navigation. If locomotion can be processed automatically and safely, then the visual system, together with the memory system, can implement in parallel the more complicated processing of the allocentric representation of space.

Allocentric Euclidean space

Our perception/recognition of space relies on projective geometry, as mentioned in Section 1. Projective geometry contains three typical 2-dimensional geometries: spherical, Euclidean, and hyperbolic geometry, which provide a set of canonical Riemannian manifolds of constant sectional curvature 1, 0 and -1, respectively, invariant under the action of projective transformation groups. Although adult humans have an intuition of the Euclidean properties of large-scale layouts of the environment, evidence that navigation depends on non-Euclidean representations comes not only from studies of insects [60] but also from research on human navigation in immersive virtual environments [61]. What mechanisms underlie this?
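
For reference, the three canonical constant-curvature model geometries can be written as Riemannian metrics in standard coordinates; this is a textbook summary added here for clarity, not material from the cited studies.

```latex
% The three canonical 2-dimensional model geometries of constant
% sectional curvature K, as Riemannian metrics in standard coordinates:
\begin{align*}
  K = +1 &: \quad ds^2 = d\theta^2 + \sin^2\!\theta \, d\varphi^2
            && \text{(sphere } S^2\text{)} \\
  K = 0  &: \quad ds^2 = dx^2 + dy^2
            && \text{(Euclidean plane } \mathbb{E}^2\text{)} \\
  K = -1 &: \quad ds^2 = \frac{dx^2 + dy^2}{y^2}, \quad y > 0
            && \text{(hyperbolic plane } \mathbb{H}^2\text{)}
\end{align*}
% Gauss--Bonnet ties curvature to geodesic-triangle angle sums:
% \alpha + \beta + \gamma = \pi + K \cdot \mathrm{Area}.
```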

Several experiments show that human subjects accurately detect the effects of gravity on target motion but are poorly sensitive to arbitrary accelerations that violate gravity, and that the visual reference frame for up and down is anchored to the physical gravitational vertical, as sensed by the vestibular system [62]. The vestibular system is able to estimate the gravity vector continuously by combining signals from the otoliths and the semicircular canals [63,64]. Because of the lack of a “primary” vestibular cortex, humans cannot consciously perceive the gravity vector per se, but vestibular information about the gravity vector influences many cortical areas, including the insular cortex (contributing to the recognition of body image), the MTL (contributing to memory) and the visual cortex [65].
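
One generic way such an estimate can be maintained, offered here as an illustrative sketch rather than as the model of the cited work, is a complementary filter: the canal signal (angular velocity) rotates the current gravity estimate, and the otolith signal (specific force, which equals gravity when the head is not accelerating) slowly corrects it. All parameters below are invented.

```python
# A generic complementary filter combining the two vestibular signals:
# canals rotate the gravity estimate, otoliths correct its drift.
import numpy as np

def update_gravity(g_est, omega, f_otolith, dt=0.01, k=0.1):
    """One filter step. g_est: unit gravity estimate in head coordinates;
    omega: head angular velocity (rad/s); f_otolith: sensed specific
    force, equal to gravity when the head is not accelerating."""
    g_pred = g_est - dt * np.cross(omega, g_est)   # canal-driven rotation
    g_new = (1 - k) * g_pred + k * f_otolith       # slow otolith correction
    return g_new / np.linalg.norm(g_new)

g = np.array([0.0, 0.0, -1.0])                     # start: gravity along -z
omega = np.array([0.0, 0.5, 0.0])                  # steady pitch, 0.5 rad/s
for step in range(100):                            # 1 s of head rotation
    pitch = 0.5 * (step + 1) * 0.01
    true_g = np.array([np.sin(pitch), 0.0, -np.cos(pitch)])  # head frame
    g = update_gravity(g, omega, true_g)
print("estimate:", np.round(g, 3), " true:", np.round(true_g, 3))
```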

A particularly interesting fact is that vestibular signals strongly influence the hippocampal system that underlies navigation. Accurate navigation depends on knowledge of one’s current spatial position and direction, and these representations must utilize an allocentric, i.e., world-based, frame of reference. Two fundamentally distinct systems are thought to process these representations. One system entails a representation of the subject’s location by the firing of place cells (in rodents) or spatial-view cells (in primates) in the hippocampus or adjacent areas [66]. The second system involves cells referred to as head-direction (HD) cells, which encode the subject’s perceived directional heading [67]. HD cells have been identified in monkeys [68] and are thought to underlie our sense of direction [69]. The importance of the vestibular system for the HD system is well established [65]. HD cells coexist with grid cells and border cells in the parahippocampal, medial entorhinal, pre- and para-subicular cortical areas and are found throughout the limbic system [70,71]. As mentioned in Section 1, the response properties of grid cells are mathematically relevant to the representational means for the spatial structure of the conformal mapping of a bounded region, and border cells are considered to define the boundary of such a bounded region [72]. The coexistence of HD cells with grid cells and border cells indicates that the representation of the projective geometry of external space by the grid cells may be anchored to the gravitational vector derived from the vestibular signals, that is, to a line or a plane of constant curvature 0, yielding Euclidean geometry at least locally and temporarily. Because the vestibular signal of the gravitational vector is continuously sent to the HD system, the overall structure of the space represented by the grid cells may be Euclidean, providing a productive system of Euclidean geometry in the brain. The vestibular system therefore has a particularly important role in the neural representation of environmental space within a Euclidean, world-based frame of reference.

Another important fact is that the vestibular system influences limbic-system structures through the theta rhythm, which is most apparent during navigation and is thought to be important for processes including: 1) encoding and/or retrieval of mnemonic information, 2) timing the interaction of information between prefrontal and hippocampal areas, and 3) behaviors that involve the use of spatial information [65]. The importance of the theta rhythm in the processing of hippocampal information has been demonstrated at a number of levels, including a phenomenon referred to as theta precession [73] and the direct projection to the hippocampus of the septal area, which has a key role in the origin of the theta rhythm [74]. Since vestibular activation clearly influences the theta rhythm and cell responses in the hippocampus, there is a deep computational connection between spatial and temporal coding in the hippocampus. This may serve the neural representation of the space-time diagram resulting from navigation.

Conflict of interest

The authors declare no financial interest or conflict of interest.

References

  1. Lamb TD, Collin SP, Pugh EN Jr (2007) Evolution of the vertebrate eye: opsins, photoreceptors, retina and eye cup. Nat Rev Neurosci 8: 960-976. [Crossref]
  2. Schwartz EL (1977) Spatial mapping in the primate sensory projection: analytic structure and relevance to perception. Biol Cybern 25: 181-194. [Crossref]
  3. Schwartz EL (1980) Computational anatomy and functional architecture of striate cortex: a spatial mapping approach to perceptual coding. Vision Res 20: 645-669. [Crossref]
  4. Tootell RB, Switkes E, Silverman MS, Hamilton SL (1988) Functional anatomy of macaque striate cortex. II. Retinotopic organization. J Neurosci 8: 1531-1568. [Crossref]
  5. Mishkin M, Ungerleider LG (1982) Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behav Brain Res 6: 57-77. [Crossref]
  6. Van Essen DC, Felleman DJ, DeYoe EA, Olavarria J, Knierim J (1990) Modular and hierarchical organization of extrastriate visual cortex in the macaque monkey. Cold Spring Harb Symp Quant Biol 55: 679-696. [Crossref]
  7. Eichenbaum H, Yonelinas A, Ranganath C (2007) The medial temporal lobe and recognition memory. Ann Rev Neurosci 30: 123-152. [Crossref]
  8. Spelke ES, Lee SA (2012) Core systems of geometry in animal minds. Philos Trans R Soc Lond B Biol Sci 367: 2784-2793. [Crossref]
  9. Slater A, Mattock A, Brown E, Bremner JG (1991) Form perception at birth: Cohen and Younger (1984) revisited. J Exp Child Psychol 51: 395-406. [Crossref]
  10. Lehrer M, Campan R (2004) Shape discrimination by wasps (Paravespula germanica) at the food source: generalization among various types of contrast. J Comp Physiol A 190: 651-663. [Crossref]
  11. Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, et al. (2003) Cellular networks underlying human spatial navigation. Nature 425: 184-188. [Crossref]
  12. Doeller CF, Barry C, Burgess N (2010) Evidence for grid cells in a human memory network. Nature 463: 657-661. [Crossref]
  13. Klein F (1872) A comparative review of recent researches in geometry (Programme on entering the philosophical faculty and the senate of the University of Erlangen in 1872). Translated by Haskell MW, Bull New York Math Soc 2 (1892-1893): 215-249.
  14. Hoffman WC (1966) The Lie algebra of visual perception. J Math Psychol 3: 65-98.
  15. Sharon E, Mumford D (2006) 2D-shape analysis using conformal mapping. Int J Comput Vis 70: 55-75.
  16. Minkowski H (1909) Raum und Zeit. Physikalische Zeitschrift 10: 104-111.
  17. Minkowski H (1910) Geometrie der Zahlen. Leipzig and Berlin: B.G. Teubner. JFM 41.0239.03, MR 0249269.
  18. Gauthier Y (2009) Hermann Minkowski: from the geometry of numbers to physical geometry. In: Petkov V (Ed.), Minkowski spacetime: a hundred years later (Fundamental Theories of Physics, 165). Berlin: Springer, 247-258.
  19. Rodin B, Sullivan D (1987) The convergence of circle packings to the Riemann mapping. J Differential Geometry 26: 349-370.
  20. Rodin B (1987) Schwarz’s lemma for circle packings. Invent Math 89: 271-289.
  21. Carter I, Rodin B (1992) An inverse problem for circle packing and conformal mapping. Transactions of the American Mathematical Society 334: 861-875.
  22. Killian NJ, Jutras MJ, Buffalo EA (2012) A map of visual space in the primate entorhinal cortex. Nature 491: 761-764. [Crossref]
  23. Jacobs J, Weidemann CT, Miller JF, Solway A, Burke JF, et al. (2013) Direct recordings of grid-like neuronal activity in human spatial navigation. Nat Neurosci 16: 1188-1190. [Crossref]
  24. Moser EI, Roudi Y, Witter MP, Kentros C, Bonhoeffer T, et al. (2014) Grid cells and cortical representation. Nat Rev Neurosci 15: 466-481. [Crossref]
  25. Howard MW, Eichenbaum H (2015) Time and space in the hippocampus. Brain Res 1621: 345-354. [Crossref]
  26. MacDonald CJ, Lepage KQ, Eden UT, Eichenbaum H (2011) Hippocampal "time cells" bridge the gap in memory for discontiguous events. Neuron 71: 737-749. [Crossref]
  27. Menzel R (2012) The honeybee as a model for understanding the basis of cognition. Nat Rev Neurosci 13: 758-768. [Crossref]
  28. Srinivasan MV (2006) Small brains, smart computations: vision and navigation in honeybees, and applications to robotics. International Congress Series 1291: 30-37.
  29. Srinivasan MV (2011) Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics. Physiol Rev 91: 413-460. [Crossref]
  30. Menzel R, Giurfa M (2001) Cognitive architecture of a mini-brain: the honeybee. Trends Cogn Sci 5: 62-71. [Crossref]
  31. Lehrer M, Campan R (2005) Generalization of convex shapes by bees: what are shapes made of? J Exp Biol 208: 3233-3247. [Crossref]
  32. Srinivasan MV, Lehrer M, Horridge GA (1990) Visual figure ground discrimination in the honeybee: the role of motion parallax at boundaries. Proc R Soc Lond B Biol Sci 238: 331-350.
  33. Lehrer M (1994) Spatial vision in the honeybee: the use of different cues in different tasks. Vision Res 34: 2363-2385. [Crossref]
  34. Lehrer M, Wehner R, Srinivasan M (1985) Visual scanning behaviour in honeybees. J Comp Physiol A 157: 405-415. [Crossref]
  35. Kirchner WH, Srinivasan MV (1989) Freely flying honeybees use image motion to estimate object distance. Naturwissenschaften 76: 281-282.
  36. David CT (1982) Compensation for height in the control of groundspeed by Drosophila in a new “Barber’s pole” wind tunnel. J Comp Physiol 147: 485-493.
  37. Gibson JJ (1950) The perception of the visual world. Boston: Houghton Mifflin.
  38. Srinivasan MV, Zhang SW, Chahl JS, Barth E, Venkatesh S (2000) How honeybees make grazing landings on flat surfaces. Biol Cybern 83: 171-183. [Crossref]
  39. Von Frisch K (1967) The dance language and orientation of bees. The Belknap Press of Harvard Univ. Press.
  40. Land MF (1999) Motion and vision: why animals move their eyes. J Comp Physiol A 185: 341-352. [Crossref]
  41. Yarbus AL (1967) Eye movements and vision. New York: Plenum.
  42. Ditchburn RW, Ginsborg BL (1952) Vision with a stabilized retinal image. Nature 170: 36-37. [Crossref]
  43. Riggs LA, Ratliff F (1952) The effects of counteracting the normal movements of the eye. J Opt Soc Am 42: 872-873.
  44. Gilchrist ID, Brown V, Findlay JM (1997) Saccades without eye movements. Nature 390: 130-131. [Crossref]
  45. Gilchrist ID, Brown V, Findlay JM, Clarke MP (1998) Using the eye-movement system to control the head. Proc Biol Sci 265: 1831-1836. [Crossref]
  46. Land MF, Furneaux SM, Gilchrist ID (2002) The organization of visually mediated actions in a subject without eye movements. Neurocase 8: 80-87. [Crossref]
  47. Kozeis N, Anogeianaki A, Mitova DT, Anogianakis G, Mitov T, et al. (2006) Visual function and execution of microsaccades related to reading skills, in cerebral palsied children. Int J Neurosci 116: 1347-1358. [Crossref]
  48. Elder JH, Velisavljević L (2009) Cue dynamics underlying rapid detection of animals in natural scenes. J Vis 9: 7. [Crossref]
  49. Pasupathy A, Connor CE (1999) Responses to contour features in macaque area V4. J Neurophysiol 82: 2490-2502. [Crossref]
  50. Pasupathy A, Connor CE (2002) Population coding of shape in area V4. Nat Neurosci 5: 1332-1338. [Crossref]
  51. Martinez-Conde S, Macknik SL, Hubel DH (2004) The role of fixational eye movements in visual perception. Nat Rev Neurosci 5: 229-240. [Crossref]
  52. Martinez-Conde S, Otero-Millan J, Macknik SL (2013) The impact of microsaccades on vision: towards a unified theory of saccadic function. Nat Rev Neurosci 14: 83-96. [Crossref]
  53. Hubel DH, Wiesel TN (1974) Sequence regularity and geometry of orientation columns in the monkey striate cortex. J Comp Neurol 158: 267-293. [Crossref]
  54. Angelaki DE, Hess BJ (2005) Self-motion-induced eye movements: effects on visual acuity and navigation. Nat Rev Neurosci 6: 966-976. [Crossref]
  55. Clifford CW, Ibbotson MR (2002) Fundamental mechanisms of visual motion detection: models, cells and functions. Prog Neurobiol 68: 409-437. [Crossref]
  56. Hausen K (1993) Decoding of retinal image flow in insects. in Visual motion and its role in the stabilization of gaze, eds. F.A. Miles, J. Wallman (Amsterdam: Elsevier). 203-235.
  57. Douglass JK, Strausfeld NJ (2001) Pathways in dipteran insects for early visual motion processing. in Motion Vision: Computational, neural and ecological constraints. Eds. J.M., Zanker, J., Zeil (Berlin:Springer ) 66-81.
  58. Schiff D, Cohen B, Raphan T (1988) Nystagmus induced by stimulation of the nucleus of the optic tract in the monkey. Exp Brain Res 70: 1-14. [Crossref]
  59. Belknap DB, McCrea RA (1988) Anatomical connections of the prepositus and abducens nuclei in the squirrel monkey. J Comp Neurol 268: 13-28. [Crossref]
  60. Wehner R, Menzel R (1990) Do insects have cognitive maps? Annu Rev Neurosci 13: 403-414. [Crossref]
  61. Rothman DB, Warren WH (2006) Wormholes in virtual reality and the geometry of cognitive maps. J Vision 6: 143.
  62. Lacquaniti F, Bosco G, Indovina I, La Scaleia B, Maffei V, et al. (2013) Visual gravitational motion and the vestibular system in humans. Front Integr Neurosci 7: 101. [Crossref]
  63. Angelaki DE, Cullen KE (2008) Vestibular system: the many facets of a multimodal sense. Annu Rev Neurosci 31: 125-150. [Crossref]
  64. Cullen KE (2012) The vestibular system: multimodal integration and encoding of self-motion for motor control. Trends Neurosci 35: 185-196. [Crossref]
  65. Shinder ME, Taube JS (2010) Differentiating ascending vestibular pathways to the cortex involved in spatial cognition. J Vestib Res 20: 3-23. [Crossref]
  66. Moser EI, Kropff E, Moser MB (2008) Place cells, grid cells, and the brain's spatial representation system. Annu Rev Neurosci 31: 69-89. [Crossref]
  67. Clark BJ, Taube JS (2012) Vestibular and attractor network basis of the head direction cell signal in subcortical circuits. Front Neural Circuits 6: 7. [Crossref]
  68. Robinson CJ, Burton H (1980) Organization of somatosensory receptive fields in cortical areas 7b, retroinsula, postauditory and granular insula of M. fascicularis. J Comp Neurol 192: 69-92. [Crossref]
  69. Taube JS, Muller RU, Ranck JB Jr (1990) Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J Neurosci 10: 420-435. [Crossref]
  70. Sargolini F, Fyhn M, Hafting T, McNaughton BL, Witter MP, et al. (2006) Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science 312: 758-762. [Crossref]
  71. Taube JS (2007) The head direction signal: origins and sensory-motor integration. Annu Rev Neurosci 30: 181-207. [Crossref]
  72. Stewart S, Jeewajee A, Wills TJ, Burgess N, Lever C (2013) Boundary coding in the rat subiculum. Philos Trans R Soc Lond B Biol Sci 369: 20120514. [Crossref]
  73. O'Keefe J, Recce ML (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3: 317-330. [Crossref]
  74. Stewart M, Fox SE (1990) Do septal neurons pace the hippocampal theta rhythm? Trends Neurosci 13: 163-168. [Crossref]

Editorial Information

Editor-in-Chief

George Perry
The University of Texas at San Antonio

Article Type

Research Article

Publication history

Received: March 04, 2016
Accepted: March 21, 2016
Published: March 24, 2016

Copyright

©2016 Takahashi S. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation

Takahashi S, Ejima Y (2016). Basic principles of visual functions: mathematical formalism of geometries of shape and space, and the architecture of visual systems. J Syst Integr Neurosci 2: doi: 10.15761/JSIN.1000119

Corresponding author

Shigeko Takahashi

Psychology Laboratory, Kyoto City University of Arts, Ohe-Kutsukake-cho, 13-6, Nishikyo-ku, Kyoto 601-1197, Japan, Tel: +81-(0)75-334-2265

E-mail : sgtak@kcua.ac.jp