Movement detection in three-dimensional space: the end product of multisensory integration
Movement detection and navigation are the end products of multimodal interaction, involving a number of senses, e.g., vision, equilibrium (vestibular sensors), tactile cues, and others, depending on the species.
The vertebrate labyrinth is one example of a motion detection system that monitors and represents movements in three-dimensional space. The sensors are the head-fixed semicircular canals (for detecting rotations) and the otoliths (for detecting translations and static tilts). The semicircular canals are oriented in three planes of physical space. The vertebrate labyrinth in its most elaborate expression possesses six semicircular canals, three on each side of the head, whose spatial arrangement (vertical canals placed diagonally in the head, horizontal canals placed earth-horizontally) follows three interconnected principles: 1) bilateral symmetry, 2) a push-pull operational mode, and 3) mutual orthogonality of the canals.
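To make the three principles concrete, here is a minimal numerical sketch (our illustration, not part of the abstract) in which the three coplanar canal pairs are idealized as mutually orthogonal axes; real canal planes deviate from these directions by several degrees, and the axis conventions below are assumptions:

import numpy as np

# Idealized axes of the three coplanar canal pairs in head coordinates
# (x forward, y left, z up). Orthogonality is exact here by construction.
PAIR_AXES = {
    "horizontal pair (left/right HC)": np.array([0.0, 0.0, 1.0]),
    "LARP pair (left ant./right post.)": np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0]),
    "RALP pair (right ant./left post.)": np.array([np.cos(np.pi / 4), -np.sin(np.pi / 4), 0.0]),
}

def pair_signals(omega):
    """Signed projection of head angular velocity (rad/s) onto each pair axis.
    A positive value drives one member of the bilateral pair above its resting
    discharge and its mirror-symmetric partner below it (push-pull); because
    the three axes are mutually orthogonal, the three signals specify any rotation."""
    return {name: float(axis @ omega) for name, axis in PAIR_AXES.items()}

# A pure leftward yaw loads only the horizontal pair:
print(pair_signals(np.array([0.0, 0.0, 1.0])))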
At the motor execution site of vestibular reflexes, in particular the vestibulo-ocular reflex, we find a spatial arrangement of the extraocular muscles similar to that exemplified by semicircular canal orientation. This organization occurs regardless of the orientation of the optic axis, i.e., whether an animal is lateral-eyed or frontal-eyed.
Since most vertebrates tend to locomote with their eyes open, the visual world has to be represented centrally in three dimensions as well. Spatial aspects of visual-vestibular interaction related to compensatory eye movements have been examined in the accessory optic system, including the inferior olive, flocculus, and vestibular nuclei. This system of visually driven cells forms an "optokinetic coordinate system" whose coordinate axes are closely aligned with the axes of the semicircular canals and the extraocular muscles. This intrinsic geometry has now been confirmed for rotational as well as translational (optic flow) visual stimuli.
With regard to visual-vestibular interaction, neurons in the vestibular nuclei typically exhibit a 'complementary' response, i.e., the visual on-direction is opposite to the vestibular on-direction, since the visual world "moves" on the retina in the direction opposite to a given head movement. In our studies of visual-vestibular interaction in parietal cortex neurons, namely in the ventral intraparietal area (VIP), we found exclusively 'non-complementary' responses, i.e., visual on-directions always co-directional with the vestibular on-direction. In essence, this response pattern is geared to generate a visual-vestibular sensory conflict situation. Similar response characteristics are not found uniformly in other cortical regions. Both complementary and non-complementary visual-vestibular interactions exist in the so-called vestibular cortices, area 2v and PIVC (parieto-insular vestibular cortex), and in thalamic nuclei. Area 7a has been reported to contain neurons with exclusively non-complementary visual and vestibular on-directions in the roll plane.
Vestibular responses of VIP neurons were typically in phase with head velocity, or even with jerk. A small number of neurons coded only one signal quality, either acceleration or velocity. The vast majority of cells carried two or more parameters, including a possible position signal (head in space or head direction).
Movement-related sensory responses will have to be interpreted in specific behavioral contexts within the anatomical and neurobiological constraints imposed by the investigated biological organism.
The cortical analysis of motion signals on the two retinas proceeds in parallel and to some degree independently of other visual cues. One important task for the visual system is to use these signals for orientation and movement through a three-dimensional scene. This is mostly achieved in extrastriate cortex, within a pathway called the "dorsal stream" that leads to posterior parietal cortex. In the monkey this pathway contains several areas, starting with Area 17 (V1), Area 18 (V2 and V3), and Area 19 (V4 and V5=MT), and projecting to the medial superior temporal area (MSTd), the ventral intraparietal area (VIP), and parietal area 7a. Area MSTd is the putative area of heading perception (Duffy and Wurtz, 1991). Since MSTd projects to area VIP, one might expect the latter area to be sensitive to optic flow stimulation, as was indeed found (Schaafsma and Duysens, 1996; Bremmer, Graf, Ben Hamed and Duhamel, personal communication).
Because of the biological importance of optic flow (obstacle avoidance; maintenance of heading direction), one might expect that even brief exposures to such stimuli suffice to elicit responses. On the other hand, optic flow patterns can be complex, and their processing may therefore be assumed to be time-consuming. In both man and monkey, very little is known about the minimum duration needed to detect heading accurately. To determine the lower limit of exposure time for the generation of direction-selective responses, cells in area VIP were tested with visual stimuli of various durations. It was found that a two-frame stimulus was already sufficient to induce direction-selective responses, but much longer stimuli were needed to develop full responses.
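That a two-frame stimulus can suffice is consistent with correlation-type motion detection, where a single frame-to-frame displacement already yields a signed output. A toy sketch (ours; the abstract does not specify the neurons' mechanism):

import numpy as np

def correlator_output(frames):
    """Correlation-type (Reichardt-like) 1-D motion detector: each pixel's
    previous value is multiplied with its right neighbour's current value,
    minus the mirror-symmetric arm; the sign of the sum is direction selective."""
    total = 0.0
    for prev, cur in zip(frames[:-1], frames[1:]):
        rightward = prev[:-1] * cur[1:]
        leftward = prev[1:] * cur[:-1]
        total += float(np.sum(rightward - leftward))
    return total

# A two-frame stimulus: a bright bar steps one pixel to the right.
f1 = np.zeros(16); f1[5] = 1.0
f2 = np.zeros(16); f2[6] = 1.0
print(correlator_output([f1, f2]))   # > 0: rightward
print(correlator_output([f2, f1]))   # < 0: leftward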
Retinal optic flow signals have to be combined with extraretinal signals about rotations of the eye, head, or trunk to allow the detection of heading during locomotion while fixating targets away from the heading direction (see Graf and Bremmer, this volume). Such fixation induces smooth pursuit eye movements in combination with complex optic flow. In VIP there are cells which respond both to optic flow and to smooth pursuit. Other cells code purely for eye position in the orbit (fixation-related firing).
The data from awake monkeys will be related to the question of how humans can maintain heading direction while turning the eyes, head, or shoulders. Experiments using simulated motion predict that we should be able to judge heading accurately during active eye or head turns because we are able to use extra-retinal cues (Crowell et al., 1998). Is this also true for real locomotion and for shoulder turns? We investigated these questions by having subjects walk on a treadmill while they followed a horizontally and sinusoidally moving dot with their eyes, head, or shoulders. As predicted, the subjects showed very little lateral deviation when they performed the eye or head movement task. On average, head movements gave only slightly more deviation than eye movements. In contrast, shoulder movements led to a large increase in deviation. These data support the notion that humans are able to use retinal and extra-retinal signals to maintain the direction of walking despite eye or head turns. However, this type of information is insufficient to maintain accurate heading with turned shoulders, at least in untrained subjects.
References:
Crowell, J.A., Banks, M.S., Shenoy, K.V. & Andersen, R.A.: Visual self-motion perception during head turns. Nature Neuroscience (1998) 1(8): 732-737.
Duffy, C.J. & Wurtz, R.H.: Sensitivity of MST neurons to optic flow stimuli. I. A continuum of response selectivity to large-field stimuli. Journal of Neurophysiology (1991) 65: 1329-1345.
Schaafsma, S.J., Duysens, J.: Neurons in the ventral intraparietal area of awake macaque monkey closely resemble neurons in the dorsal part of the medial superior temporal area in their responses to optic flow patterns. Journal of Neurophysiology (1996), 76(6), 4056-4068.
Different areas of the primate cortex are associated with vestibular information processing. These areas are the parieto-insular vestibular cortex in the retro-insular field, area 2v in the parietal cortex, two areas inside area 3a (3aNv and 3aHv), parts of area 6, a field inside the cingulate cortex, and the visual posterior sylvian area (VPS). Thalamic input to these areas runs mainly via the proprioceptive relay nuclei of the ventro-posterior complex, the posterior part of the ventro-posterior complex, and the pulvinar. The vestibulo-thalamic fibres terminate mainly inside the centrum medianum, the central lateral nucleus, the ventrolateral nucleus, and the ventral posterior nucleus. We therefore suggest that a great deal of vestibular information reaches the cortex by way of the proprioceptive pathways.
Our data from different primates show that all vestibular cortical areas have certain similarities in their cortico-cortical, thalamo-cortical, and cortico-vestibular connections. An elaboration of these similarities reveals a vestibular-thalamo-cortico-vestibular loop that is suggested to form a basic system for the computation of body and head-in-space movements. This loop has strong input to the parietal cortical fields and may update the information required by this cortical region to generate space-centered consciousness.
It has been shown that sudden changes in the horizontal disparity of a large textured scene result in vergence responses with ultra-short latencies (~60 ms in monkeys). We recorded single-unit activity in the medial superior temporal (MST) area of the cortex during horizontal disparity steps that evoked these vergence responses and found neurons that are activated 50-70 ms after the onset of the horizontal disparity steps. To obtain their disparity tuning curves, horizontal disparity steps (crossed and uncrossed, ranging in amplitude from 0.5° to 6.0°) were applied 50 ms after a centering saccade. Some of the neurons (~30%) had disparity tuning curves similar to those of "disparity-selective" neurons described previously in V1, V2, and MT of the monkey, and others (~20%) discharged more closely in relation to the motor behavior (vergence) than to the sensory stimuli (disparity). The disparity tuning curves of such "vergence-related" neurons closely resembled those of the short-latency vergence responses. Some of these neurons discharged early enough to play some role in producing the very earliest of the vergence responses.
A moving observer receives a pattern of motion on the retina that is informative about his own movement and the layout of the environment. In addition to this information in the optic flow, extra-retinal signals are at hand to inform the observer about his self-movement. Different strata can be distinguished in the interplay between visual and non-visual signals concerning self-movement. At one level, the interaction between eye movement signals and retinal flow signals helps to perceive one's direction of self-motion during eye rotations. At another level, eye orientation signals are required to determine the direction of heading relative to the head instead of relative to the retina. Finally, given the rather long processing time for heading direction from flow, extra-retinal signals help to counter the errors that arise from such a lag. A common framework for understanding such interactions could be gain-field-type modulation of retinal-flow-sensitive units by eye position and eye velocity signals.
Psychophysical experiments to test these ideas will be presented.
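A minimal sketch of the gain-field idea mentioned above (our illustration; the cosine tuning and the linear gain slopes are arbitrary choices made for the example):

import numpy as np

def flow_unit(retinal_dir, pref_dir, eye_pos, eye_vel, k_pos=0.02, k_vel=0.05):
    """A retinal-flow-sensitive unit with planar gain fields: cosine tuning
    for retinal motion direction (radians), multiplicatively modulated by
    eye position (deg) and eye velocity (deg/s)."""
    tuning = max(0.0, float(np.cos(retinal_dir - pref_dir)))
    gain = max(0.0, (1.0 + k_pos * eye_pos) * (1.0 + k_vel * eye_vel))
    return tuning * gain

# The same retinal stimulus yields different responses for different eye
# states, so a downstream population can recover head-centred heading:
print(flow_unit(0.0, 0.0, eye_pos=0.0, eye_vel=0.0))
print(flow_unit(0.0, 0.0, eye_pos=10.0, eye_vel=5.0))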
Self-movement perception relies on the integration of multi-sensory and motor cues about heading direction. The visual information in optic flow must be integrated with non-visual sensory and motor signals from the vestibular system and from eye movement control to support heading perception. We have used single neuron recording in behaving monkeys to assess MST's role in this process.
We recorded responses to optic flow stimuli simulating the visual scene during self-movement, with or without real whole-body translational movement. We have found evidence for two neuronal subpopulations in MST: one shows larger responses when optic flow is presented together with whole-body movement, the other shows smaller responses when the two are presented together. We hypothesize that these subpopulations support the perceptual differentiation of self-movement from the movement of objects in the visual environment.
As one moves through the visual environment, pursuit eye movements add rotational motion to the retinal image of the optic flow field, displacing the center of motion and potentially confounding heading perception. We have studied the impact of pursuit on heading-selective responses in MST, presenting optic flow stimuli with centers of motion at 8 different locations around the visual field during pursuit in 8 different directions. In these experiments, almost all MST neurons showed substantial changes in their heading selectivity during pursuit. However, the population response of the neurons studied showed little change in heading selectivity between fixation and all directions of pursuit.
Thus, our studies show that single neurons in cortical area MST combine visual information with vestibular and eye movement signals to maintain a robust representation of the heading of self-movement. Our studies also suggest that MST contains distinct functional subpopulations that have different roles in visual motion processing. Combined with the notion of population encoding, we see the potential for dynamic interactions in MST based on different groups of neurons contributing more or less robustly to the population response under different circumstances.
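The contrast between labile single-cell tuning and a stable population readout can be illustrated with a toy rate-weighted population code (our sketch; all parameters are arbitrary):

import numpy as np
rng = np.random.default_rng(0)

n_cells = 64
pref = rng.uniform(-40.0, 40.0, n_cells)        # preferred heading azimuths (deg)

def rates(heading, tuning_shift_sd=0.0):
    """Gaussian heading tuning; 'tuning_shift_sd' randomly displaces each
    cell's tuning curve, standing in for pursuit-induced changes."""
    centers = pref + rng.normal(0.0, tuning_shift_sd, n_cells)
    return np.exp(-0.5 * ((heading - centers) / 15.0) ** 2)

def decode(r):
    """Rate-weighted average of preferred headings (a simple population code)."""
    return float(np.sum(pref * r) / np.sum(r))

print(decode(rates(5.0)))                        # fixation
print(decode(rates(5.0, tuning_shift_sd=8.0)))   # 'pursuit': cells shift, estimate barely moves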
Motion processing in primate visual cortex clearly embodies hierarchical processing, progressing from very local analysis in V1 to very "global" analysis in area MST and parietal areas. One long-standing hypothesis is that the wide-field motion detectors in area MST are used in analysis of self-motion from optic flow. We have tested this hypothesis by artificially activating area MST in monkeys performing a two-alternative heading discrimination task using optic flow displays simulating linear translation towards a 3D dot cloud.
Monkeys showed reliable and sensitive performance on this task, similar to that of human observers. Also, our monkeys were able to largely compensate for smooth pursuit eye movements, again in a manner similar to that of human observers on such tasks. Together, these psychophysical data suggest that our monkeys are perceiving "virtual" self-motion from our displays.
We identified sites in MST that were selective for the horizontal axis of heading and placed a microstimulation electrode in the center of such regions. Low-amplitude, high-frequency pulse trains were used to activate these regions locally, and the effect on discrimination performance was measured. We found that microstimulation in MST significantly altered heading perception in the majority of cases. The effects were predominantly to bias the monkey towards left or right headings, and to a lesser extent to reduce sensitivity to the visual stimulus. However, the direction of the induced bias was incompletely predicted by the tuning of the neurons being activated. This suggests that MST is directly involved in the perception of heading, but that this involvement might be more complex than anticipated from the simplest rate-coding or vector models.
In addition, the effect of microstimulation was found to depend, on average, on smooth pursuit eye movements. Two forms of interaction were seen. First, the magnitude of the stimulation-induced bias tended to be larger on trials that included smooth pursuit. Second, the direction of the bias induced by microstimulation tended to favor the direction of the pursuit. This pattern of results is consistent with MST being involved in the process by which ongoing eye movements are taken into account by the visual system in forming the perception of heading.
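In this two-alternative task, the two effect types correspond to a horizontal shift and a slope change of the psychometric function; a schematic (ours, with arbitrary parameters):

import numpy as np

def p_right(heading_deg, bias_deg=0.0, sensitivity=0.5):
    """Logistic psychometric function for the two-alternative heading task.
    'bias_deg' shifts the curve horizontally (the dominant microstimulation
    effect); lowering 'sensitivity' flattens it (the weaker effect)."""
    return 1.0 / (1.0 + np.exp(-sensitivity * (heading_deg + bias_deg)))

headings = np.linspace(-8, 8, 9)
print(np.round(p_right(headings), 2))                 # control trials
print(np.round(p_right(headings, bias_deg=3.0), 2))   # stimulated: biased choices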
Neurons in the medial superior temporal (MST) area of the macaque monkey respond to complex visual motion patterns such as optical expansions. Such motion patterns (optic flow fields) are normally experienced during self-motion and can be used to estimate the direction of heading. MST neurons show sensitivity to the direction of heading in an optic flow pattern, suggesting an involvement in the determination or control of self-motion. This leads to the questions of how the direction of heading is represented in area MST and how the sensitivities of optic-flow-processing neurons in MST are established.
In many areas of the brain behaviorally relevant parameters are represented in the form of a topographic map. By analogy, such a map has been hypothesized also for the representation of heading. Questions then arise about the structure of this map and the
properties of its constituents.
The talk will present a neural model for heading detection from optic flow. The model claims that the direction of heading is encoded in a distributed fashion in a computational heading map consisting of optic-flow-selective neurons that resemble MST cells. The model makes predictions for the behavior of single neurons and of neuronal populations involved in heading estimation. These predictions are discussed in relation to recent electrophysiological findings.
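A generic template-matching sketch of a computational heading map (ours; it is not the specific model presented in the talk, and all parameters are assumed):

import numpy as np
rng = np.random.default_rng(1)

# Random dots (image plane, focal length 1) at random depths.
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
Z = rng.uniform(2, 10, 200)

def flow(T):
    """Retinal flow of a static scene for observer translation T = (Tx, Ty, Tz)."""
    return (x * T[2] - T[0]) / Z, (y * T[2] - T[1]) / Z

u, v = flow(np.array([0.1, 0.0, 1.0]))      # true focus of expansion at (0.1, 0)

# Each unit in the map prefers one candidate focus and sums the projection of
# the observed flow onto its (depth-free) radial template directions.
def activation(fx):
    tu, tv = x - fx, y
    norm = np.hypot(tu, tv) + 1e-9
    return float(np.sum((u * tu + v * tv) / norm))

candidates = np.linspace(-0.3, 0.3, 61)
peak = candidates[np.argmax([activation(f) for f in candidates])]
print(peak)                                  # the map peaks near the true heading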
Experiments will be discussed examining the role of pursuit eye movement and vestibular canal signals in self-motion perception. When we move forward in the world the visual scene expands on the retinas, a stimulus that is often referred to as optical flow. This optic flow stimulus can be used to perceive the direction of self-motion. However, if the gaze direction is also rotating during translation through the environment, as would occur when fixating an earth-bound object, then the retinal image is disrupted by the additional retinal motion caused by the gaze rotation. Psychophysical studies have shown that extraretinal pursuit signals are used to compensate for the motions caused by eye movements. However, during locomotion the gaze often rotates due to head turns as well as pursuit eye movements. We determined that extraretinal signals about head turns are also used to correctly perceive the direction of heading in humans. Furthermore, we found that the most accurate heading
perception is achieved when three extraretinal signals are simultaneously present: vestibular canal stimulation, neck proprioception, and an efference copy of the motor command to move the head. These three signals are all present during active head turns.
In related neurophysiological experiments we examined the effects of pursuit eye movement and vestibular canal signals on cells in area MSTd which are selective for optical flow stimuli. These cells are tuned for the focus of an expanding flow field, and this focus indicates the direction of self-motion when gaze is not moving. We have found that extra-retinal eye pursuit signals shift the focus tuning curves of these cells to enable them to compensate for the visual motions due to eye rotations and still code the correct self-motion direction. Recently we have also found that vestibular signals, indicating gaze shifts due to head rotation, produce the same
compensation in area MSTd neurons. These experiments indicate that area MSTd neurons likely contribute to the computation of self-motion perception and compensate for both eye and head turns. However, as indicated above, complete perceptual compensation in humans requires that the vestibular signals be present in conjunction with neck proprioceptive and efference copy signals. In our physiological experiments the MSTd neurons showed a high degree of compensation when vestibular signals were present alone. Thus, although area MSTd is no doubt heavily involved in the computation of self-motion perception, it may not be the only stage for the computation of this percept, which likely also involves other cortical areas.
The majority of neurons in the macaque ventral intraparietal area (VIP) respond selectively to optic flow stimuli mimicking forward or backward motion. Responses to optic flow stimuli are often not compatible with a neuron's frontoparallel stimulus preference and its receptive field properties. The vast majority of neurons reveal an influence of the location of the focus of expansion or contraction on their responsiveness, and the population of neurons is capable of encoding the location of the focus of the optic flow in the visual field. These results strongly suggest an involvement of area VIP in the analysis of self-motion information, while raising the question of why two nearby areas within the dorsal stream of the primate visual system, areas MST and VIP, are both specialized for the processing of optic flow stimuli.

There are indeed resemblances between the two areas. Both area MST and area VIP receive strong projections from area MT, and neurons in both areas have very similar visual response properties. On the other hand, profound differences make the two areas distinct. The most prominent difference between area VIP and area MST is the responsiveness to tactile stimuli found in area VIP. These tactile responses are especially interesting in that they are often directionally selective, with comparable preferred directions for the visual and tactile stimulus modalities. In a parallel study, we could show that many neurons in area VIP are even trimodal, i.e., they respond to visual, tactile, and vestibular stimulation. In such cases, the on-directions for all three sensory modalities were co-directional. This multimodality of sensory responses led us to ask about the coordinate system in which sensory information is encoded in area VIP. It turned out that many VIP neurons encode visual spatial information in head-centered coordinates, whereas this type of encoding is not found in area MST. A functional hypothesis considers the role of area VIP in the encoding and visual guidance of movement in near-extrapersonal space. This consideration is supported by recent anatomical findings of distinct projections between parietal and premotor cortex, in particular connections between area VIP and a region of premotor cortex controlling head and neck movements. A specific function of area VIP could therefore be to guide movements, based on multimodal sensory input, toward objects of interest in near-extrapersonal space. Such a function would differ from that of area MST, where both parts of visual space (near and far) are equally represented, and could argue for two different optic flow areas coding different parts of extrapersonal space.
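The operational test for head-centred encoding is whether a cell's receptive field shifts on the retina as the eyes move; a schematic of the two possibilities (ours, with arbitrary positions in degrees):

def retinal_rf_center(eye_pos_deg, head_centred, rf_head=5.0, rf_retina=5.0):
    """Retinal location of a cell's receptive field for a given eye position.
    A head-centred cell keeps its field at a fixed head-referenced azimuth,
    so its retinal field shifts opposite to the eyes; a retinocentric cell's
    field stays put on the retina."""
    return rf_head - eye_pos_deg if head_centred else rf_retina

for eye in (-10.0, 0.0, 10.0):
    print(eye, retinal_rf_center(eye, True), retinal_rf_center(eye, False))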
In theory, we can never be completely sure about our sense of orientation with respect to gravity, and especially if we omit vision, this proves to be true. This problem follows from what Einstein called the equivalence principle: there is no way to distinguish gravity from inertial forces. We cannot discern, for example, between sitting still on earth and being accelerated in a rocket far away from earth. Hence, our sense of verticality can only be an estimate obtained by deduction. To "explain" the phenomena related to spatial (dis)orientation, several models have been described. The most recent models share the concept of an internal model that, based on previous experience, should be able to give a better judgement of (e.g.) verticality than sensor afferents alone can. There are, however, many requirements that have remained obscure so far, and not all necessary solutions have been found yet. This paper will attempt to explain these requirements, with emphasis on the fundamentals of (modelling) spatial orientation perception.
The angular vestibulo-ocular reflex (aVOR) generates compensatory eye velocities for head movements. In three dimensions, this compensation comprises generating an eye velocity magnitude that counters head velocity with a gain depending on subject and species. The direction of the eye velocity vector, on the other hand, exhibits position-dependent axis tilts. In this study, we develop a model that incorporates the geometric organization of the canals and first-order canal-endolymph dynamics. This model then drives a vector velocity-position integrator and a plant incorporating muscle pulleys. The pulleys, characterized by a pulley coefficient, rotate the torque vector and maintain eye rotation in accord with Listing's law during saccades (Raphan, 1998). Using this model, we show how the effects of canal plugging can be modeled as an alteration in the low-frequency 3-dB roll-off and the corresponding dominant time constant, without the need to postulate spatial adaptation of the aVOR. By incorporating a dynamic model of the canals into the three-dimensional canal system, the spatial responses of eye velocity can be predicted over a wide range of head movement frequencies. The model also shows how eye-position-dependent tilts of the eye velocity vector can be modeled using a pulley coefficient similar to that found during saccades. The model therefore affords a simplified view of the spatial and eye-position dependence observed in the dynamics of the aVOR.
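A sketch of the canal-dynamics stage only (our discretization; the integrator and pulley stages are omitted, and the time constants are illustrative): the canal acts as a first-order high-pass filter on head velocity, and plugging is expressed purely as a shortened dominant time constant, which moves the low-frequency 3-dB corner (f_c = 1/(2*pi*tau)).

import numpy as np

def canal_afferent(head_vel, dt=0.01, tau=5.7):
    """First-order high-pass canal model, H(s) = s*tau / (1 + s*tau), applied
    to head velocity; discretized with backward Euler:
    y[k] = a*y[k-1] + a*(u[k] - u[k-1]), with a = tau / (tau + dt)."""
    a = tau / (tau + dt)
    y = np.zeros_like(head_vel)
    for k in range(1, len(head_vel)):
        y[k] = a * y[k - 1] + a * (head_vel[k] - head_vel[k - 1])
    return y

t = np.arange(0.0, 30.0, 0.01)
step = np.where(t > 1.0, 100.0, 0.0)          # 100 deg/s velocity step
normal = canal_afferent(step, tau=5.7)        # slow per-rotatory decay
plugged = canal_afferent(step, tau=0.07)      # plugged canal: fast decay
print(normal[500], plugged[500])              # 4 s after step onset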
In the last 15 years, interest in three-dimensional approaches to the control of eye movements has revived. This was boosted by the fact that three-dimensional eye movement analysis became practical with the development of the magnetic-field search-coil technique. New analytical approaches have made the mathematics of eye rotations and coordinate transformations more tractable and intuitive. Strabismus, labyrinthine dysfunction, and brain disorders leading to nystagmus and other eye movement disorders are ubiquitous clinical problems and demand a three-dimensional approach for their understanding. This is especially true when dealing with vestibular problems: the vestibular system is intrinsically three-dimensional, trying to stabilize the retinal image in all three rotational degrees of freedom. Under pathological conditions, we often find spontaneous or elicited eye movements with torsional components.
The key to understanding vestibularly induced eye movements was found in the early 1960s: electrical stimulation of a single semicircular canal nerve induces eye movements roughly in the plane of that canal. If more than one canal is stimulated, the signals of the different canals combine at least roughly linearly to drive the eyes; the slow phases should then lie in a direction that is a weighted vector sum of the axes of the involved canals. Using this premise, one can stimulate the vestibular system in numerous ways and relate the resulting eye movements to the function or dysfunction of single semicircular canals. The presentation will give an overview of current techniques of 3D vestibular testing, including low- and high-velocity head movements in 3D, 3D calorics, and diverse methods of inducing positional nystagmus. Finally, data will be presented on how these new techniques can be used to explore vestibular disorders.
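The weighted-vector-sum premise in a few lines (our illustration; the canal axis directions are idealized and their exact orientations vary across individuals and species):

import numpy as np

# Approximate excitation axes of one labyrinth's canals in head coordinates
# (x forward, y left, z up); idealized directions for the example.
CANAL_AXES = {
    "horizontal": np.array([0.0, 0.0, 1.0]),
    "anterior":   np.array([np.cos(np.pi / 4),  np.sin(np.pi / 4), 0.0]),
    "posterior":  np.array([-np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0]),
}

def slow_phase_axis(weights):
    """Predicted slow-phase rotation axis for combined canal stimulation:
    the weighted vector sum of the stimulated canals' axes (the linearity
    premise), sign-inverted because the slow phase counter-rotates the eye."""
    v = sum(w * CANAL_AXES[name] for name, w in weights.items())
    return -v / np.linalg.norm(v)

# Equal stimulation of the horizontal and anterior canal nerves:
print(slow_phase_axis({"horizontal": 1.0, "anterior": 1.0}))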
A true three-dimensional optokinetic stimulus can be provided by a planetarium projector. This projection system is a small motor-driven sphere with drill holes. The sphere is illuminated by a halogen lamp. The projection pattern consists of numerous dots in the dark laboratory environment and provides a virtually full-field random dot pattern rotating at constant speed about selectable spatial axes. The motor axis can be oriented by means of a gimbal system in any spatial direction. The planetarium preserves rotation of equal angular velocity in all spatial quadrants. During an experiment, the planetarium is placed above the animal's head. The axis can be placed vertically or oriented at different angles in the horizontal plane. The motor can be operated at various speeds and oscillated clockwise and counterclockwise.
See also:
Simpson, J.I., C.S. Leonard and R.E. Soodak. The accessory optic system of rabbit. II. Spatial organization of direction selectivity. J. Neurophysiol. 60: 2055-2072, 1988.
Leonard, C.S., J.I. Simpson and W. Graf. Spatial organization of visual messages of the rabbit's cerebellar flocculus. I. Typology of inferior olivary neurons of the dorsal cap of Kooy. J. Neurophysiol. 60: 2073-2090, 1988.
Graf, W., J.I. Simpson and C.S. Leonard. Spatial organization of visual messages of the rabbit's cerebellar flocculus. II. Complex and simple spike responses of Purkinje cells. J. Neurophysiol. 60: 2091-2121, 1988.
Wylie, D.R.W., T.-K. Kripalani and B.J. Frost. Responses of pigeon vestibulocerebellar neurons to optokinetic stimulation. I. Functional organization of neurons discriminating between translational and rotational visual flow. J. Neurophysiol. 70: 2632-2646, 1993.
Wylie, D.R.W. and B.J. Frost. Complex spike activity of Purkinje cells in the ventral uvula and nodulus of pigeons in response to translational optic flow. J. Neurophysiol. 81: 256-266, 1999.
My talk will review some basic facts about eye and head kinematics, including the distinction between translatory and rotary motion, noncommutativity, Listing's law and its generalizations. I will also discuss different possible models and the roles of neural circuitry and muscle geometry in eye control.
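Noncommutativity in a few lines of linear algebra (our illustration): the same two 90-degree rotations applied in opposite orders leave a forward-pointing gaze line in different places.

import numpy as np

def rot(axis, deg):
    """Rotation matrix about coordinate axis 'x', 'y' or 'z' by deg degrees."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return {"x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
            "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
            "z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])}[axis]

gaze = np.array([1.0, 0.0, 0.0])              # forward
print(rot("z", 90) @ rot("y", 90) @ gaze)     # ~[0, 0, -1]: gaze ends up down
print(rot("y", 90) @ rot("z", 90) @ gaze)     # ~[0, 1, 0]:  gaze ends up left
# Listing's law removes the resulting torsional ambiguity by restricting the
# rotation axes (from primary position) to a single head-fixed plane.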
The magnetic search coil technique has become the standard technique for eye movement recordings with high temporal and spatial resolution. Current magnetic search coil systems are based on either two or three magnetic fields, arranged in space quadrature and driven at different frequencies, or at one frequency but in phase quadrature. The optimal calibration procedure for the sensor coils on the eye may depend on the type of magnetic field system used, the characteristics of the sensor coil, and the experimental model. While in human subjects the Jansen-Collewijn annulus is widely used as the sensor device, coil implants are usually used in animal studies for two- or three-dimensional eye movement recordings. These implants must comprise two individual sensor coils in different planes and exhibit certain other characteristics to allow reliable recording of calibrated three-dimensional eye movements. In this workshop we will focus on techniques developed for chronic recording of calibrated three-dimensional eye movements in rhesus monkeys.
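The underlying signal model in a few lines (ours; amplitudes are normalized, and the frequency or phase separation of the two fields is abstracted away): the voltage each field induces in the sensor coil is proportional to the field component along the coil normal.

import numpy as np

def coil_amplitudes(normal, fields):
    """Amplitude induced in a sensor coil by each oscillating field:
    proportional to the flux, i.e. the field component along the coil
    normal (the per-field signals are separated by frequency or phase
    in a real system)."""
    return [float(B @ normal) for B in fields]

fields = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]  # space quadrature
theta = np.radians(25.0)                                          # horizontal eye rotation
normal = np.array([np.cos(theta), np.sin(theta), 0.0])            # coil normal = gaze
v1, v2 = coil_amplitudes(normal, fields)
print(np.degrees(np.arctan2(v2, v1)))                             # recovers 25.0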
While the development of video systems for the recording of 3D eye movements has made large advances, such systems are currently still not completely accepted by the scientific community. The reasons for the slow acceptance are in part the still high (although decreasing) cost of such systems, and in part a number of open questions concerning the analysis of video-based images of the eyes. We will give an overview of the hardware used in such systems, which is mainly based on standard video equipment and is therefore limited to recording speeds of 50 to 60 Hz. New technologies, currently implemented in test systems, will increase the possible sampling rate up to 400 Hz. We will also outline methods of data analysis and discuss which issues remain to be solved by future generations of video systems.
Spatial orientation and navigation require the 3D reconstruction of head movement in space from multisensory information and the combination of this integrated multimodal information with spatial memory.
Our group has performed several types of experiments to address the following questions:
1/ What is being stored during passive or active navigation tasks in 2D space? In particular, we have studied the interaction between otolith and canal information for the updating of body orientation. A mobile robot has been used for these experiments.
2/ What is the respective contribution of visual, vestibular and proprioceptive cues during navigation and locomotion? These experiments have been performed using virtual reality.
3/ What are the brain structures involved in processing self-motion information during navigation?
To answer these questions, several experiments have been performed:
a) Brain imaging recordings (3 Tesla fMRI) of activation induced by galvanic stimulation and during tasks requiring the perception of the subjective straight ahead;
b) Brain imaging recordings (PET Scan) during memorized navigation tasks.
These results will be described and discussed in the general frame of a theory suggesting that the brain does not code only position in space ("cognitive maps") but also actions, using both dynamic memory processes and maps.
BERTHOZ A, ISRAËL I, GEORGES-FRANÇOIS P, GRASSO R & TSUZUKU T. (1995) Spatial memory of body linear displacement: what is being stored ? Science, 269: 95-98.
BERTHOZ, A., VIAUD-DELMON, I. & LAMBREY, S. (1999) Spatial memory during navigation: what is being stored, maps or movements? In: Galaburda & Kosslyn, S.M. (eds) Languages of the Brain. Harvard University Press (in press)
GHAEM, O., MELLET, E., CRIVELLO, F., TZOURIO,N., MAZOYER, B., BERTHOZ, A. & DENIS, M. (1997) Mental navigation along memorized routes activates the hippocampus, precuneus and insula. NeuroReport, 8: 739-744.
GRASSO, R., GLASAUER, S., TAKEI, Y. & BERTHOZ, A. (1996) The predictive brain: anticipatory control of head direction for the steering of locomotion. NeuroReport, 7: 1170-1174.
ISRAËL, I., RIVAUD, S., GAYMARD, B., BERTHOZ, A. & PIERROT-DESEILLIGNY, C. (1995) Cortical control of vestibular-guided saccades in man. Brain, 118: 1169-1183.
IVANENKO, Y., GRASSO, R., ISRAEL, I., & BERTHOZ, A. (1997) The contribution of otoliths and semicircular canals to the perception of two-dimensional passive whole-body motion in humans. J. Physiol. (Lond) 502:223-233.
LOBEL, E., KLEINE, J.F., LE BIHAN, D., LEROY-WILLIG, A. & BERTHOZ, A. (1998) Functional MRI of galvanic stimulation. J. Neurophysiol. 80: 2699-2709.
VALLAR, G., LOBEL, E., GALATI, G., BERTHOZ, A., PIZZAMIGLIO, L. & LE BIHAN, D. (1999) A fronto-parietal system for computing the egocentric spatial frame of reference in humans. Exp. Brain Res. (in press)
VIAUD-DELMON, I., IVANENKO, Y., BERTHOZ, A., & JOUVENT, R. (1998) Sex, lies and virtual reality. Nature Neurosci. 1: 15-16.
There are now about a dozen visual areas that can be demonstrated reliably in human visual cortex using functional magnetic resonance imaging (fMRI). The most posterior regions are occupied by classically retinotopic areas, including V1, V2, V3/VP, V3A, and V4v. Anterior to these is a strip of areas with crude retinotopy, which is nevertheless reliable and consistent across subjects when using high-field fMRI and extensive signal averaging. These crudely retinotopic areas include V7, LOC/LOP, and V8. Area V8 is involved in color processing, and the degree of retinotopic separation between V4v and V8 is currently controversial. The human region (LOC/LOP) that is topographically similar to dorsal V4 (V4d) in macaque is marked by highly unusual retinotopy, quite different from that described for macaque V4d.
All these areas respond better to moving than to stationary stimuli, to some extent. Among them, V3A is the most biased toward moving stimuli, and human V3 is among the least biased. In the same moving/stationary test, area 'MT+' (including putative homologues of MT, MST, and other satellite areas) is even more biased toward moving stimuli. A wide range of tests has been performed in human MT+, including tests of motion illusions. In essentially all tests, human MT+ appears quite similar to macaque areas MT + MST. There are at least two additional areas, located more anteriorly and superiorly, which are non-retinotopic and appear to be involved in motion perception, especially in wide-field motion tests.
Visual spatial attention is crucially involved in motion perception, because of the dominant role played by related eye movements. In recent tests of visual spatial attention, we were able to reveal this 'spotlight' of attention in flattened cortical maps, relative to maps of the retinotopy itself, in the same subjects. When the stimulus was small, we found relatively decreased attentional signals in V1 compared to extrastriate cortex, but when the stimulus was made larger we found robust attentional effects in V1 as well. The 'spotlight' of attention had approximately the same level of retinotopic specificity as the sensory retinotopy. We also found apparent decreases in activity at each retinotopic region when attention was directed elsewhere; in sum, these decreases far outweighed the increases at the attended location. These data support the idea that visual spatial attention works by a 'push-pull' mechanism, enhancing neural activity at a given region of interest while suppressing neural activity elsewhere.
Evidence is presented that the multisensory parieto-insular cortex is the human homologue of the parieto-insular vestibular cortex (PIVC) of the monkey (1) and is involved in the perception of verticality and self-motion. Acute lesions of this area (in patients with middle cerebral artery infarctions) caused contraversive tilts of the perceived vertical, body lateropulsion, and, rarely, rotational vertigo (2). Brain activation studies using positron emission tomography (PET) or functional magnetic resonance imaging (fMRI) showed that PIVC was activated by caloric irrigation of the ears or by galvanic stimulation of the mastoid, indicating that PIVC receives input from both the semicircular canals and the otoliths (3,4). PIVC was also activated during small-field optokinetic stimulation, but not when the nystagmus was suppressed by fixation (5). The vestibular cortex interacts intimately with the visual cortex to match the two three-dimensional orientation maps (perception of verticality) and mediates self-motion perception by means of a reciprocal inhibitory visual-vestibular interaction (6).
The vestibular system - a sensor of head accelerations - cannot detect motion at constant velocity and thus requires supplementary visual information. The perception of self-motion at constant velocity is completely dependent on visually induced vection (e.g., circular vection, CV). CV is induced by large-field visual motion stimulation during which the subject perceives the actually moving surroundings as stable while perceiving himself as moving. To determine the unknown cortical visual-vestibular interaction during CV, a PET activation study was conducted in human volunteers in which PET images of activated cortical areas during motion stimulation without CV were compared to those with CV (6). If CV were mediated by the vestibular cortex, one would expect adequate visual motion stimulation to activate both the visual and the vestibular cortex. Contrary to this expectation, however, it was shown for the first time that visual motion stimulation with CV not only bilaterally activates a medial parieto-occipital visual area separate from MT/MST, but significantly deactivates the PIVC. This mechanism of inhibitory interaction allows a shift of the dominant sensory weight during self-motion perception from one sensory modality (visual or vestibular) to the other, depending on which mode of stimulation prevails: body acceleration (vestibular input) or constant-velocity motion (visual input).
(1) Grüsser et al. (1990) J Physiol 430: 537-57
(2) Brandt et al. (1994) Ann Neurol 35: 403-12
(3) Bottini et al. (1994) Exp Brain Res 99: 164-9
(4) Bucher et al. (1998) Ann Neurol 44: 120-5
(5) Dieterich et al. (1998) Brain 121: 1479-95
(6) Brandt et al. (1998) Brain 121: 1749-58
The human cortex, like the macaque cortex, contains many regions responsive to retinal motion (Dupont et al., J. Neurophysiol. 1994). We have set ourselves the task of understanding why, using a dual approach: studying different types of motion stimuli and different types of tasks performed with moving stimuli. It is well known that the analysis of retinal motion can serve many behavioral purposes (Nakayama, Vision Res. 1985). Optic flow, generated by relative movement of the observer with respect to the environment, is a rich source of information both about the motion of the observer and about the motion in, and the three-dimensional structure of, the environment. We have studied the human cortical regions involved in extracting 3D structure from motion by comparing MR activation evoked by 3D and 2D moving stimuli, independently of their rigidity (factorial design). Regions involved in processing 3D motion stimuli include a number of regions along the intraparietal sulcus, a lateral occipital region, and the human homologue of MT/V5+ (Zeki et al., J. Neurosci. 1991). The latter was predicted by our monkey recording studies indicating that MT/V5 neurons signal the direction of speed gradients, which correspond to tilt in depth of planar surfaces (Xiao et al., Eur. J. Neurosci. 1997). A heading task, compared to a control detection task, involved posterior occipital regions, including the human homologue of V3A (Tootell et al., J. Neurosci. 1997), and a dorsal parietal region.
As part of the presurgical evaluation of intractable partial epilepsies, chronic EEG recordings using intracerebral or subdural macroelectrodes are routinely performed in order to define the limits of the epileptogenic area. During such procedures, electrical stimulation can be carried out to map functionally eloquent cortical areas, and much of what we know about functional localization in the human brain was obtained in this manner. However, only a few studies have dealt with electrically induced eye movements and vestibular sensations. It has been shown that contraversive eye movements are reliably observed when stimulating the dorsolateral surface of the precentral cortex, and that vestibular responses can be elicited by stimulating the superior temporal gyrus and the insular cortex.
We have reviewed the results of electrical stimulation performed in 233 epileptic patients explored by means of stereotactically implanted intracerebral electrodes, focusing our attention on the cortical areas from which oculomotor and vestibular responses were elicited. The methodology used, stereo-electroencephalography, allowed us to record from and stimulate various mesial and lateral cortical areas, including sulcal cortex, and to localize all relevant stimulation sites anatomically according to their stereotactic coordinates in Talairach's atlas.
Only a few patients experienced versive eye movements (n = 15 sites, 12 patients) or vestibular sensations (n = 57 sites, 29 patients) at one or more stimulation sites, thus limiting the significance of our findings. However, it remains that: i) there is a relatively limited region, located at the intersection of the precentral sulcus with the superior frontal sulcus, in which contraversive eye movements can easily be elicited by electrical stimulation; this area may be homologous to the macaque monkey's low-threshold frontal eye field, but is located more posteriorly, apparently within Brodmann's area 6; ii) vestibular responses are obtained by stimulating various cortical areas, most of which are located in the vicinity of the sylvian fissure (suprasylvian opercular region, superior temporal gyrus, inferior parietal cortex near the insula, supramarginal gyrus); other areas notably include portions of the intraparietal cortex, the posterior bank of the central sulcus, and the cingulate gyrus, thus suggesting some degree of homology with the simian vestibular cortex.
Optic flow is the pattern of velocity projected onto the retina during relative motion between the observer and the visual scene. It has been shown to play an important role not only in motor and posture control but also in the perception of 3D motion and depth. The pattern of retinal velocity depends locally on the distance between the observer and the components of the visual scene. It has been shown that spatial variations of optic flow, also called motion parallax, are utilised by the visual system for the perception of 3D self- and object motion, and also for the perception of the 3D structure of objects (Rogers and Graham, 1979 [1]). In this presentation, we will review results on the cortical correlates of optic flow and 3D structure from motion.
In the monkey, the study of optic flow processing has led to the description of sensitivity to spatial variations in retinal velocity in several dorsal areas, in particular MSTd (Duffy and Wurtz, 1991 [2]). Recent studies by Xiao et al. (1997) [3] and Bradley et al. (1998) [4] also suggest a specific role of monkey area MT/V5 in the processing of 3D structure and 3D orientation from optic flow.
In man, positron emission tomography (PET) studies have compared the presentation of coherent optic flow (expansion flow or coherent motion in one direction) versus incoherent visual motion stimuli (dots each moving in a randomly selected direction), in either wide or small fields (De Jong et al., 1994 [5]; Cheng et al., 1995 [6]; McKeefry et al., 1997 [7]). Recently, we performed an fMRI study to explore the neural correlates of the perception of structure from motion parallax, using distributions of dots randomly spread over a moving surface [SHFJ-CEA, LPPA-CNRS/CdF]. A common finding of all these studies is the involvement of the dorsal stream in coherent motion processing, in particular the dorsal occipital gyrus (V3/V3A) and lateral posterior parietal foci that seem to be specific for coherent as compared to random motion. However, it is still controversial whether human MT/V5 is specifically involved in the extraction of depth from optic flow. Interestingly, comparisons between coherent and incoherent motion stimuli also showed activity in the fusiform gyrus (ventral pathway), which has elsewhere been shown [8] to participate in object and shape recognition.
1 - Rogers BJ and Graham M. Perception 8: 125-134, 1979.
2 - Duffy CJ and Wurtz RH. J. Neurophysiol. 65: 1329-1345, 1991.
3 - Xiao DK, Marcar VL, Raiguel SE and Orban GA. Eur. J. Neurosci. 9: 956-964, 1997.
4 - Bradley DC, Chang GC and Andersen RA. Nature 392: 714-716, 1998.
5 - De Jong BM, Shipp S, Skidmore B, Frackowiak RSJ and Zeki S. Brain 117: 1039-1054, 1994.
6 - Cheng K, Fujita H, Kanno I, Miura S and Tanaka K. J. Neurophysiol. 74: 413-427, 1995.
7 - McKeefry DJ, Watson JDG, Frackowiak RSJ, Fong K and Zeki S. NeuroImage 5: 1-12, 1997.
8 - Martin A, Wiggs CL, Ungerleider LG and Haxby JV. Nature 379: 649-652, 1996.
The human frontal eye field (FEF) is a cortical structure presumed to be involved in the generation of saccadic eye movements. Despite wide acceptance of this general claim, there is uncertainty regarding the exact location and function of the FEF. Recent experiments conducted on patients with frontal lobe lesions and on subjects undergoing transcranial magnetic stimulation (TMS) will be discussed. These studies provide evidence that the FEF is involved not only in voluntary saccade generation but also in the inhibition of reflexive glances. Consistent with recent functional imaging studies, the neuroimaging results from our patients and TMS subjects indicate that the human FEF is located in the cortical region 2 cm anterior to the motor hand area.
The vestibular sensory epithelium detects the position and motion of the head in space. These sensory signals are used for the perception of self-motion and for producing a variety of reflexes that help maintain balance and equilibrium. In order to obtain an accurate estimate of the movement of the body in space, vestibular estimates of head movement need to be converted into estimates of body movement. A similar transformation of vestibular signals is required in vestibular reflex pathways that function to stabilize the position of the body in space. The firing behavior of horizontal semicircular canal-related neurons in the vestibular nuclei was studied in squirrel monkeys whose heads were free to move. During passive rotation of the whole body, the movements of the head and body in space were often not identical, primarily because the vestibulo-collic reflex (VCR) produced a movement of the head on the trunk that reduced head velocity. Yet the sensory signals produced by many neurons in the vestibular nuclei were unaffected by the VCR; these neurons generated signals that were better related to trunk velocity in space than to head velocity. Thus the transformation of vestibular signals from head coordinates into body coordinates is evident at an early stage of processing in the vestibular system.
Perceptual and reflex functions also require a distinction between sensory signals related to passive, externally induced head movements and those related to active, voluntary head movements. Passive movement of the head in space produces a variety of reflex movements of the back, limbs, neck, and eyes that stabilize the body and/or gaze in space. These reflexes must be suppressed or cancelled during self-generated head movements. The vestibulo-ocular reflex (VOR) and the VCR produce reflex movements of the eyes and head that stabilize images on the retina when the head is perturbed; these reflexes must be suppressed when the head is turned voluntarily to look at another image or to follow an image that is moving in space. Single units in the vestibular nuclei related to the VOR and the VCR were not sensitive to voluntarily produced head movements, although they were nearly as sensitive to passive head-on-trunk rotation as to whole-body rotation. This suggests that vestibular signals related to active head movements were cancelled primarily by subtraction of a head movement efference copy signal.
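The subtraction hypothesis in schematic form (our sketch, with made-up velocity traces):

import numpy as np

t = np.linspace(0.0, 2.0, 200)
head_on_trunk = 20.0 * np.sin(2 * np.pi * t)     # active head-on-trunk turn (deg/s)
trunk_in_space = np.full_like(t, 10.0)           # passive whole-body rotation (deg/s)
head_in_space = trunk_in_space + head_on_trunk   # what the canals sense

efference_copy = head_on_trunk                   # copy of the voluntary neck command
neuron = head_in_space - efference_copy          # canal signal minus efference copy

# The modelled unit ignores the active component and encodes trunk-in-space velocity:
print(np.allclose(neuron, trunk_in_space))       # True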
The brain must distinguish between sensory events that are externally induced and those that are self-generated in order to develop an accurate perception of the external world and produce coordinated behavior. In the vestibular system, the distinction appears to be made by the first neurons in the brain that receive input from the vestibular nerve. Apparently, the recognition of self-generated and
non-self-generated head movements is too important to be postponed until a later stage of sensory processing.
Orientation selectivity is thought to represent one of the earliest functions of cortical neurons involved in visual form processing. While the underlying mechanism is still a matter of debate, it has been shown for striate neurons that orientation processing is a dynamic process (Ringach et al., Nature 387: 281, 1997), which may contribute to cortical plasticity, as has been shown for other neuronal functions (Das and Gilbert, J. Neurophysiol. 74: 779, 1995; Wörgötter et al., Nature 396: 165, 1998). Further, Sauvan and Peterhans (Visual Cognition 6: 43, 1999) identified dynamic mechanisms of orientation selectivity in prestriate cortex. They studied the effect of body tilt on the orientation selectivity of cortical neurons and found prestriate neurons that showed orientation constancy with respect to the direction of gravity. This suggests that cortical mechanisms of orientation selectivity may also incorporate information about the direction of gravity as detected by the vestibular organs (otoliths) or by proprioception.
In a review, possible advantages of dynamic processes of orientation selectivity will be discussed with regard to cortical representations of one- and two-dimensional contours during head or body tilt, and in the light of a theory of view-dependent representations of three-dimensional scenes.
Unlike primates, rabbits have no fovea. Therefore their eye movements are solely used for minimising retinal slip (VOR, OKR), while foveating movements, like saccades or pursuit, are absent. We studied what eye movements the rabbit makes in response to transparently moving visual patterns. We found that the response is fully determined by the weighted intensities of the stimulating patterns. When concurrent vestibular stimulation was given, this weighting process changed, in such a way that the visual stimulus that was in register with the vestibular input prevailed. Since the strength of the vestibular influence did not depend on the absolute intensity of the visual patterns, we propose that the transparency trade-off occurs at a level that is upstream of the visual-vestibular interaction.
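One hypothetical form of this weighting rule (ours; the multiplicative vestibular boost and its strength k are assumptions, not the measured law):

import numpy as np

def okr_drive(axes, intensities, vestib=None, k=1.0):
    """Response to transparently moving patterns: a weighted average of the
    patterns' rotation axes, with weights set by pattern intensity. A
    concurrent vestibular rotation boosts the weight of the pattern in
    register with it (hypothetical multiplicative form with strength k)."""
    axes = np.asarray(axes, dtype=float)
    w = np.asarray(intensities, dtype=float)
    if vestib is not None:
        w = w * (1.0 + k * np.clip(axes @ np.asarray(vestib, float), 0.0, None))
    w = w / w.sum()
    return w @ axes

yaw, pitch = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
print(okr_drive([yaw, pitch], [1.0, 1.0]))               # balanced trade-off
print(okr_drive([yaw, pitch], [1.0, 1.0], vestib=yaw))   # the in-register pattern prevails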
We further investigated the role of the cerebellar flocculus in the processing of transparent motion. Floccular injections of the ACh agonist carbachol are known to increase the gain of the OKR. After such injections, the effect of vestibular stimulation on the processing of transparent motion was reduced, indicating a role for the flocculus in this process.
Finally, single-unit recordings from floccular Purkinje cells were made during transparent stimulation. A large fraction of the cells responded to the weighted input of both stimulating patterns.
If numbers matter, the projection connecting the cerebral cortex with the cerebellum is probably one of the most important pathways in the central nervous system. Its extensive development across the phylogenetic scale parallels that of the cerebral hemispheres and the cerebellum and accompanies the improvement in motor skills, suggesting that this system might play a decisive role in the generation of skilled movement. This view was first suggested more than 20 years ago by Allen and Tsukahara, but was soon forgotten for two reasons. First, the beauty of the concept of motor learning and the role of the climbing fiber system mesmerized the cerebellar community and prompted neglect of the second type of afferents, the mossy fiber system, which depends largely on the cerebro-pontine projection. Second, the cerebrocortical community, struck by the explosion of the number of known cerebrocortical areas, was taken up with the analysis of these newly discovered areas and the possible function of the complex patterns of intracerebral connections between them, while largely ignoring their subcortical projections.
Several developments have begun to revitalize interest in the role of cerebro-cerebellar communication. Suffice it to mention the compelling evidence for cerebellar contributions to cognition, obviously necessitating exchange of information between the cerebellum and those parts of cerebral cortex, traditionally associated with cognitive
functions. This contribution will try to convey evidence for the notion that an understanding of skilled movement and cognition requires an understanding of the functional architecture of cerebro-cerebellar communication. Towards this end, recent advances from anatomical, electrophysiological, and clinical studies approaching cerebro-cerebellar communication at different levels will be presented. Special consideration will be given to the role of the pontine nuclei, the interface between the cerebral cortex and the cerebellum, the understanding of which seems to be the key to a full understanding of how cerebro-cerebellar communication works.
Exposure to extended periods of weightlessness in orbital flight has profound effects on the neurovestibular system and influences head and eye movements, postural control, and spatial orientation. The associated space motion sickness is among the earliest of the signs of adaptation to this new environment. This report both reviews the prominent neurovestibular phenomena associated with going into space and returning to earth and relates the issues to vestibular compensation and rehabilitation. New results from the Spacelab SLS-2 mission are included, showing significant reductions in post-flight ocular counterrolling and changes in ocular counterrolling left/right asymmetries after 2 weeks in space.
The problem of visual space perception is the recovery of the location, shape, size, and orientation of objects in the environment from the pattern of light reaching the eyes. The visual system uses spatial differences between the two eyes' retinal images to glean information about the 3-D layout of the environment; this is called stereoscopic vision. The horizontal differences between the two eyes' images can be represented by the horizontal size ratio (HSR), the ratio of the horizontal angles that a small surface patch subtends in the two eyes. Changes in HSR are perceived as changes in slant, but the slant of a surface patch cannot be determined from HSR alone, because HSR is a function of the patch's position relative to the head as well as of its slant. A variety of additional signals available to the visual system can in principle allow veridical slant estimation given the observed HSR. We have shown experimentally that the visual system also uses the vertical size ratio (VSR) and its derivative, the felt positions of the eyes, and texture gradients to help determine the slant of a surface patch. We have also shown that haptic feedback can change the weights given to these various signals.
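The geometric ambiguity is easy to reproduce numerically; in this sketch (ours; viewing distance, patch size, and interocular distance are assumed values), a frontoparallel patch straight ahead gives HSR ~ 1, while either slanting it or displacing it horizontally moves HSR away from 1:

import numpy as np

IOD = 0.062   # assumed interocular distance (m)

def h_angle(eye_x, e1, e2):
    """Horizontal angle subtended at an eye at (eye_x, 0) by a patch whose
    horizontal edges are at (x, z) positions e1 and e2."""
    return abs(np.arctan2(e1[0] - eye_x, e1[1]) - np.arctan2(e2[0] - eye_x, e2[1]))

def hsr(slant_deg, dist=0.5, width=0.02, azimuth_x=0.0):
    """HSR of a small patch: left-eye over right-eye horizontal angle. The
    patch is centred at (azimuth_x, dist) and rotated by slant_deg about
    the vertical axis."""
    s = np.radians(slant_deg)
    dx, dz = np.cos(s) * width / 2, np.sin(s) * width / 2
    e1, e2 = (azimuth_x - dx, dist - dz), (azimuth_x + dx, dist + dz)
    return h_angle(-IOD / 2, e1, e2) / h_angle(IOD / 2, e1, e2)

print(hsr(0.0))                  # ~1.0: frontoparallel, straight ahead
print(hsr(30.0))                 # slant moves HSR away from 1 ...
print(hsr(0.0, azimuth_x=0.1))   # ... but so does eccentricity alone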
The goal is to achieve the seemingly impossible: to entertain you with Listing's law. To achieve this goal, no data will be shown. Instead, key properties will be demonstrated using live computer simulations in VRML, the virtual reality modelling language. This will include a brief demonstration of Listing's law, of what the eye sees when it obeys this law, and of the consequences of this perception for the generation of movements. The generalised law during vergence, L2, will be simulated, as well as its consequences for the perception of slant.
Disruption of space perception subsequent to cortical lesions has long been thought of as a unitary phenomenon such as "hemispatial neglect". Recent clinico-anatomical evidence, however, suggests a modular organization of space perception and space integration with respect to global and local features, far and near space, perception and action, and perceptual and "mental" space representations.
In 1915, Cajal and Sanchez published their famous monograph on the organization of the insect visual system. In it, these authors drew the specific analogy, based on similarities of neuronal organization, between the retinae of mammals, cephalopods and the optic lobes of insects, especially flies, and proposed that even animals equipped with compound eyes performed sophisticated visual processing. How sophisticated this may be is only now becoming apparent from studies of visual learning, visual navigation, and recordings from interneurons, many of which show surprising similarities with neurons from mammalian visual centers.
I shall discuss recent examples from my own and other labs, particularly that of Professor Srinivasan and collaborators in Canberra, showing that insects not only can associate different visual cues and remember combinations of these, but are able to make decisions in visual navigation based on perceived generalizations. Even insects such as flies, not usually associated with "intelligence", can discriminate and learn small differences in pattern orientation, a phenomenon that would appear to find a neural counterpart in small-field pyramidal-cell-like orientation detectors. These neurons comprise a brain center whose structure bears some surprising similarities with striate cortex.
Useful comparisons between mammalian and insect visual systems can be drawn from two other levels. One pertains to systems of identified retinotopic neurons that comprise early motion computing circuits that in the insect reside in neuropils equivalent to the mammalian outer and inner plexiform layers. These neural circuits can be compared with systems of motion-sensitive ganglion cells and amacrines described from rabbit retina. Another level of comparison is at a higher center thought to be involved in learning and memory, and in context-dependent sensory association. This region, called the mushroom body, appears important in the establishment of visual and olfactory place memory. For example, in cockroaches, the recognition of a hidden target by virtue of its relationship with distant visual cues is abolished after lesions of specific regions of the mushroom bodies whereas lesions do not perturb the learned association between a visual stimulus and a second modality presented at the same spatial location.
These few examples suggest that, to some depth, the insect visual system is not only constructed along the same lines as that of many vertebrates, but that visual brain centers in insects can perform elementary visual association tasks that are thought to underlie low level form recognition, visual association, and cognition.
Werner Graf, CNRS-Collège de France, Paris
Duysens J, Gabel SF, Schaafsma SJ and Schubert M. University of Nijmegen, Nijmegen
W. Guldin, G. Dahrmann and J. Bäurle, Freie Universität, Berlin
Kenji Kawano, Aya Takemura, and Yuka Inoue. Electrotechnical Laboratory, Tsukuba
A.V. van den Berg, Erasmus University, Rotterdam
Charles J. Duffy, Univ. of Rochester Med. Ctr., Rochester
Ken Britten, University of California, Davis
Markus Lappe, Ruhr-Universität, Bochum
Richard Andersen, California Institute of Technology, Pasadena
Frank Bremmer, Ruhr-Universität, Bochum
Jelte E. Bos, TNO Human Factors Research Institute, Soesterberg
Theodore Raphan, Brooklyn College of the City University of New York, Brooklyn, NY
Michael Fetter, Eberhard-Karls-University, Tübingen
Werner Graf, CNRS-Collège de France, Paris
Douglas Tweed, University of Western Ontario, London, Ont.
Bernhard J.M. Hess, Department of Neurology, University of Zürich, Zürich
Thomas Haslwanter, Inst. of Theoretical Physics, ETH, Zürich
Alain Berthoz, Collège de France, Paris
Roger Tootell, Massachusetts General Hospital - NMR Ctr, Charlestown
Marianne Dieterich, Ludwig-Maximilians-Universität, München
Guy Orban, KU Leuven Medical School, Leuven
Philippe Kahane, University Hospital, Grenoble
Anne-Lise Paradis, Serv. Hosp. F. Joliot, Orsay
Tony Ro, University College, London, UK
Robert McCrea, University of Chicago, Chicago
Esther Peterhans, Department of Neurology, University Hospital Zürich, Zürich
Maarten Frens, Anil Mathoera and Hans van der Steen. Dept. Physiology, Erasmus University, Rotterdam
Peter Thier, Neurology Hospital, Univ. Tübingen, Tübingen
Lawrence Young, Massachusetts Institute of Technology, Cambridge, Mass.
Marty Banks, University of California, Berkeley
Tutis Vilis, University of Western Ontario, London, Ont.
Theodor Landis, Univ. Geneva, Geneva
Nicholas J. Strausfeld, University of Arizona, Tucson