RESEARCH PROJECTS IN THE LAB OF JOHN VAN OPSTAL:


1. Sound localization behavior of the blind.
2. The role of vision in human sound localization.
3. Plasticity of human sound localization: adaptation to binaural cue manipulations.
4. Plasticity of human sound localization: adaptation to monaural cue manipulations.
5. Saccade-blink interactions: behavior and neural responses in monkey superior colliculus.
6. A dynamic ensemble-coding model of the superior colliculus underlying saccades.
7. Involvement of monkey inferior colliculus in the encoding of complex sounds.
8. Involvement of monkey inferior colliculus in the encoding of sound location.
9. Influence of sound intensity and sound duration on human sound localization.
10. The encoding of spectral shape cues in the human sound localization system.
11. Auditory-visual interactions underlying saccade generation in a complex scene.
12. Influence of head position on human sound localization.
13. The generation of rapid double-step saccades: displacement vs. position integrator?
14. Dynamic Audio-Visual localization during rapid eye-head gaze shifts.
15. Role of monkey auditory cortex in the learning and representation of complex sounds.
16. Effect of body tilt on neural responses in monkey inferior colliculus.


















1. SOUND LOCALIZATION BEHAVIOR OF THE BLIND
Research group: dr. A.J. Van Opstal and drs. M.P. Zwiers

Introduction
Human sound localization relies on several acoustic cues: binaural difference cues in timing and intensity relate to changes of the sound-source position in the horizontal plane (azimuth), whereas spectral shape cues induced by the pinna geometry vary in a systematic, though complex, way with location in the median plane (elevation). It is generally accepted that the sound localization system needs to learn the mapping of these cues onto veridical spatial locations. The mechanisms by which this learning is achieved, however, are largely unknown. One potent source could be vision. The necessity of visual input for training the human auditory system, however, has been debated in the literature, and comparisons of the localization behavior of (congenitally) blind subjects and sighted controls have yielded conflicting results. Most of these studies have been confined to localization performance in the horizontal plane. In this project we have extended these findings to include elevation as well, by testing subjects under a variety of stimulus conditions and with different pointing behaviors.

Report:
Two sets of behavioral experiments with blind and sighted subjects have been completed and published. The results indicate that under normal free-field conditions the blind achieve the same localization accuracy as the sighted, both in azimuth and in elevation (paper in EBR). Both blind and sighted employ the different sound localization cues (ITDs, ILDs and spectral cues) equally well, and we could not demonstrate the use of acoustic feedback during rapid head movements in either group. We show that the arm-pointing behaviors of the blind and the sighted differ markedly: whereas sighted subjects tend to align their index finger with their cyclopean eye, the blind appear to point from their shoulder.

In a second study we show that when background noise is added to the environment, the blind subjects' performance deteriorates much faster than that of the sighted subjects. Interestingly, the difference only emerges for the elevation components of the responses, while the azimuth components are virtually identical. These data indicate that the visual system aids in fine-tuning the mapping of the spectral shape cues for sound localization (paper in J. Neuroscience).

Publications:


Sound localization behavior of early-blind and normal-sighted listeners
Sound localization in azimuth and elevation was measured by head pointing. The auditory target was a broadband 'buzzer' that was partly masked by broadband Gaussian white noise of 60 dBA in the background. The signal-to-noise ratio (SNR) was systematically varied between 0 dB (equally loud) and -21 dB (buzzer almost inaudible re. the background).
(A,B) show the raw regression results (gain and correlation) per subject; open symbols: azimuth, closed symbols: elevation. Circles: sighted. Triangles: blind.
Panel (C) shows the results after normalization of the regression parameters, which removes the variability in baseline behavior (and allows averaging across subjects).
The blind and sighted were indistinguishable when comparing their azimuth localization (C, open symbols). However, in elevation the blind were clearly impaired when compared with the sighted (closed symbols).
Grey area within the two elevation curves corresponds to a difference of at least four standard deviations. At the extremes (no background noise, and at -21 dB S/N ratio) the two groups were the same. For the intermediate S/N ratios, however, the blind clearly performed much worse. As soon as background noise sets in, the blind are almost incapable of localizing elevation.
We conclude that elevation localization in the frontal hemifield requires training by vision. However, an intact visual system is not mandatory for learning to localize stimulus azimuth.
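
The gain and correlation measures in (A,B) follow from a straightforward linear regression of response components on target components. A minimal sketch of this analysis in Python (with made-up data; the variable names are ours, not from the original analysis code):

    import numpy as np

    def localization_regression(target, response):
        """Return (gain, bias, r) of the fit response = gain * target + bias (deg)."""
        gain, bias = np.polyfit(target, response, 1)  # least-squares slope/offset
        r = np.corrcoef(target, response)[0, 1]       # Pearson correlation
        return gain, bias, r

    # Hypothetical example: a listener undershooting elevation by 30%.
    targets = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])
    responses = 0.7 * targets + np.random.normal(0.0, 3.0, targets.size)
    print(localization_regression(targets, responses))

The normalization in (C) then plausibly amounts to scaling each subject's regression parameters by their own baseline (no-noise) values.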













2. THE ROLE OF VISION IN HUMAN SOUND LOCALIZATION.
Research group: dr. A.J. Van Opstal, drs. M.P. Zwiers, and Prof. G.D. Paige (Univ. Rochester, NY)

Introduction
This project, which took place within our Human Frontiers research grant (RG0174/1998B), studied how changes in visual input may affect human sound localization performance. In young barn owls it has been shown that prisms that induce a retinal translation of the visual field also lead to a concomitant shift in the auditory localization responses. In this project we have studied the effects of 0.5x minifying glasses on pointing behavior toward sounds presented in the 2D frontal hemifield.

Report:
Experiments were conducted in the Rochester lab with nine subjects, all wearing the glasses for two to three days. Two paradigms were employed: in the first paradigm the subject pointed toward the perceived sound location while maintaining straight-ahead fixation; in the second paradigm the subject aimed the pointer at the target while foveating it. The two paradigms allow us to dissociate adaptation at the level of the oculomotor/visual systems from adaptation within the sound localization system.
The results indicate that, apart from an appropriate decrease of the VOR gain, the sound localization responses of the subjects also underwent a clear gain change. Remarkably, only responses within the shrunken visual field (about 25 x 25 deg) decreased their response gain, with a maximal effect for parafoveal target locations. The effects were only manifest in the azimuth domain. Outside the visual field of the glasses, responses kept their pre-adaptation gain values. Interestingly, the responses also acquired a bias shift toward the central visual field. After the glasses were taken off, the gains returned to normal within a day. These experiments therefore demonstrate a prominent influence of visual input on sound localization, even in the adult human. The apparent absence of a change in the elevation responses in most subjects is probably due to the short exposure time.

Publications:


Adaptation of sound localization in response to compressed vision
(A,C) Responses (averaged across subjects) across the two-dimensional stimulus-response field were quantified by the so-called 'local gain change', i.e., the difference between the locally determined slopes of the stimulus-response relation after adapting to, and before putting on, the minifying glasses. Blue colors indicate a local decrease of the slope, while dark red would indicate an increase. Note that the local gain has only decreased in the frontal field of view, roughly corresponding to the view offered by the lenses. These data show for the first time that the human auditory localization system is trained by the visual system.
(B,D) After taking off the glasses, localization returns to normal. Here, the difference was computed between the adapted state and the re-adapted state. Note that the increase in gain now extends to the entire visual field (probably because with the glasses off the field of view is much larger!).
(Data are from the two different localization paradigms: retinal-based pointing of the laser while fixating straight ahead (bottom), vs. eye pointing by foveating the laser pointer (top). Results were quite similar for both experiments, supporting the idea that adaptation indeed took place within the auditory localization system, rather than in the visual or oculomotor systems.)
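
To make the 'local gain change' measure concrete, the sketch below shows how local slopes can be computed on a grid of target positions (our reconstruction of the caption for a single response dimension; the window size and variable names are assumptions):

    import numpy as np

    def local_gains(targets, responses, grid, window=15.0):
        """Local regression slope around each grid point (all angles in deg)."""
        gains = np.full(grid.size, np.nan)
        for i, g in enumerate(grid):
            sel = np.abs(targets - g) < window   # responses to nearby targets only
            if sel.sum() >= 5:                   # require enough points for a fit
                gains[i] = np.polyfit(targets[sel], responses[sel], 1)[0]
        return gains

    # Local gain change, per the caption: negative inside the lenses' field
    # of view, near zero outside it.
    # change = local_gains(t_post, r_post, grid) - local_gains(t_pre, r_pre, grid)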












3. PLASTICITY OF HUMAN SOUND LOCALIZATION: ADAPTATION TO BINAURAL CUE MANIPULATIONS.
Research group: dr. A.J. Van Opstal and dr. P.M. Hofman

Introduction
The auditory system has to acquire knowledge about the relationships between the different sound localization cues and sound locations through learning. In this project, two experiments have been conducted to test whether long-term exposure to altered acoustic cues could induce a remapping of the acoustic-spatial relationships, thereby re-establishing veridical localization responses.

Report:
In the first series of experiments (paper in Nature Neuroscience), subjects wore binaural plastic molds that altered their spectral cues for a period of up to about five weeks. At the start of the new spectral exposure (day 0), the localization responses of the subjects were abolished in elevation, while they remained normal in azimuth. Over the course of days to weeks, normal 2D localization behavior was re-established in all subjects tested. Upon removal of the molds, the original pinna cues could immediately be used again. This shows that the auditory system of the adult human is capable of learning new spectral-spatial associations, and that this is achieved without erasing the original representations.
In the second experiment, two subjects were equipped with broadband in-the-ear-canal hearing aids that inverted the binaural acoustic cues. This led to a faithful reversal of the azimuth components of acoustic targets, without affecting elevation. Neither subject was able to reverse his responses so as to generate correct azimuth localization. As the azimuth reversal also induced a prominent front-back reversal that became manifest at every head movement, a conflict between cues may have prevented a remapping of the binaural cues. Alternatively, the change in binaural cues may have been too large for an adequate adaptive response.
Publications:



Plasticity in the human sound localization system
Summary of the experiments with one of our subjects. Each matrix represents the data of a complete localization experiment. The thin white matrix corresponds to the average locations of the targets. The blue and cyan dots are individual responses for downward and upward directed target locations, respectively. The filled, interconnected yellow dots are the average responses for targets in the same spatial region. Perfect localization requires the yellow dots to overlie the target matrix.
Upper left: localization on day 0, prior to insertion of the binaural molds, is good. Although the listener makes some systematic errors, the structure of the target distribution is well captured by the responses.
Clockwise: localization results on successive days, starting at day 1, up to day 24. Azimuth responses were always accurate, but elevation responses were initially severely impaired. However, the participant gradually relearns to localize stimulus elevation as well.












4. PLASTICITY OF HUMAN SOUND LOCALIZATION: ADAPTATION TO MONAURAL CUE MANIPULATIONS.
Research group: dr. A.J. Van Opstal and drs. M. Van Wanrooij.

Introduction
In this project we study to what extent manipulation of the acoustic cues from one ear may induce an adaptive response. In a first series of experiments, sound localization behavior under monaural conditions is studied in two groups of subjects: monauralized healthy subjects, who wear a precisely fitting rubber plug (typically more than 30 dB attenuation above 3 kHz) for a period of up to four weeks, and monaural patients. Sound localization to broadband, high-pass and low-pass stimuli of various intensities (35-70 dB) is measured, and results are compared to normal-hearing binaural listening conditions.
The results of the experiments with the monaural patients clearly show that they rely heavily on the so-called head-shadow effect. This cue, which provides an azimuth-related variation in sound level at the good ear, is, however, highly ambiguous for stimuli of unknown intensities. Yet, these patients use this cue to guide their sound localization behavior. About half of the monaural listeners, however, have learned to use the pinna cues of their good ear to gain some localization information about azimuth. Interestingly, only these listeners were also able to localize sound elevation on the side of the good ear, sometimes nearly as well as the binaural controls. In contrast, listeners who rely entirely on the head-shadow effect cannot localize sounds at all. The results of this study have recently been published in the Journal of Neuroscience.

The results of the normal-hearing subjects engaged in a monaural plugging experiment differ in a number of ways from those of the truly monaural listeners. First, there is an immediate effect of the plug, causing localization to be shifted substantially toward the normal-hearing side. This effect is quantified by the 'bias' (offset) of the regression line. Interestingly, the bias value does not seem to correlate with the amount of attenuation caused by the plug in the different subjects. The bias does depend on intensity: the lower the intensity of the sound, the smaller the bias. This effect is particularly notable for the high-pass sounds. During adaptation, the bias gradually diminishes, again in an intensity-dependent manner. Our preliminary conclusion is that subjects may rely on spectral cues for azimuth localization of low-intensity sounds, for which a plug has limited effects. For higher sound levels, however, the ILD and ITD cues are in conflict, and subjects may have adopted a strategy in which they ignore the high-frequency content altogether when determining sound-source azimuth.
These results are currently being written up as a full paper.

In a second series of experiments, thirteen subjects wore a precisely fitting mold in either their left or right ear for a period of up to four weeks, while their localization performance to different types of sounds was tested on a regular basis.

Report:
A monaural mold leads to poor elevation localization for sounds presented on the side ipsilateral to the mold, while responses on the far-contralateral side are normal. Sounds near the midline are also affected by the mold. Eight out of thirteen subjects showed a successful adaptive response to this hearing condition, whereas the five other subjects did not adapt at all.
We have also studied the acoustic properties of the molds in an attempt to understand the failure or success of the adaptation. Briefly, when the spectral correlations between the mold-induced HRTFs and the subject's original HRTFs are high, subjects do not adapt to the mold, because the spectral cues become ambiguous; instead, the subject's responses oscillate (over time) between either set of HRTFs. When the HRTF correlations are low, a quick adaptive response results (overall, subjects regain near-normal localization within 6-10 days).
We have concluded from these results that the adaptation takes place at a central level where the spectral cues are transformed into an estimate of elevation, rather than at a stage where information from the two ears is combined. The results of these experiments have recently been published in the Journal of Neuroscience.
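
The HRTF-correlation measure used here can be illustrated with a minimal sketch (the frequency band and implementation details are our assumptions, not taken from the paper):

    import numpy as np

    def spectral_correlation(hrtf_free, hrtf_mold, freqs, band=(4e3, 16e3)):
        """Pearson correlation of two log-magnitude spectra within a band (Hz)."""
        sel = (freqs >= band[0]) & (freqs <= band[1])  # elevation cues live here
        a = 20.0 * np.log10(np.abs(hrtf_free[sel]))    # free-ear spectrum (dB)
        b = 20.0 * np.log10(np.abs(hrtf_mold[sel]))    # mold-ear spectrum (dB)
        return np.corrcoef(a, b)[0, 1]

    # Per the report: high correlation -> ambiguous cues, oscillating responses;
    # low correlation -> a distinct new cue set that can be learned.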




Publications:

  • M.M. Van Wanrooij, A.J. Van Opstal
    Contribution of Head Shadow and Pinna Cues to
    Chronic Monaural Sound Localization.
    Journal of Neuroscience, 24(17): 4163-4171 (2004).
  • M.M. Van Wanrooij, A.J. Van Opstal
    Relearning Sound Localization With a New Ear.
    Journal of Neuroscience, 25(22): 5413-5424 (2005).













    5. SACCADE-BLINK INTERACTIONS: BEHAVIOR AND NEURAL RESPONSES IN MONKEY SUPERIOR COLLICULUS.
    Research group: dr. A.J. Van Opstal and dr. H.H.L.M. Goossens

    Introduction
    The role of the midbrain Superior Colliculus (SC) in the control of saccadic eye movements is still controversial. According to one hypothesis, the SC is part of the local feedback loop that controls the trajectory and kinematics of the saccade, either by a shifting wave of activity that progresses from the caudal SC to the rostral fixation zone, or by a decrease in the activity at the locus of maximum activity that follows the decrease in dynamic motor error. Others put the SC outside the feedback loop, and propose that its signal is open loop. Tests of these hypotheses have so far been inconclusive, as saccades are typically highly stereotyped, and stimulation of omnipause neurons or the rostral SC stops saccades in midflight but never leads to substantial changes in their trajectories. In this project we have perturbed saccades by precisely timed blinks (air-puff-induced rapid eye-eyelid responses) and recorded the oculomotor and eyelid behavior as well as single-unit SC responses.

    Report:
    Blinks induce a whole repertoire of perturbations: saccade trajectories become highly variable, saccade velocity is strongly reduced, and saccade duration increases almost three-fold. Yet, the endpoints of the eye movements remain as accurate as those of the non-perturbed control saccades. Moreover, the mean firing rate of the SC neurons (both burst neurons and build-up neurons) is strongly reduced, while burst duration is increased. Interestingly, burst duration roughly matches eye-movement duration and, even more remarkably, the total number of spikes in the burst is hardly affected by the strong perturbations of the responses. These results have recently been published in two papers in the Journal of Neurophysiology.

    Publications:

  • H.H.L.M. Goossens, A.J. Van Opstal
    Blink-perturbed saccades in monkey. I. Behavioral analysis.
    Journal of Neurophysiology, 83, 3411-3429, 2000.
  • H.H.L.M. Goossens, A.J. Van Opstal,
    Blink-perturbed saccades in monkey. II. Superior Colliculus activity.
    Journal of Neurophysiology, 83, 3430-3452, 2000.












    6. A DYNAMIC ENSEMBLE-CODING MODEL OF THE SUPERIOR COLLICULUS UNDERLYING SACCADES.
    Research group: dr. A.J. Van Opstal, dr. H.H.L.M. Goossens.

    Summary

    The deeper layers of the midbrain superior colliculus (SC) contain a topographic motor map in which a localized population of cells is recruited for each saccade, but how the brainstem decodes the dynamic SC output is still unclear. The reason for the current controversy is that most studies have analyzed SC responses to stereotyped saccades with little variability in trajectories and kinematics. In an attempt to resolve this issue, we have now analyzed the responses of 139 saccade-related neurons in the SC to test a new dynamic ensemble-coding model, which proposes that each spike from each SC neuron adds a site-specific contribution to the intended eye movement command.
    As predicted by this simple theory, we have found that the cumulative number of spikes in the burst is tightly related to the ideal straight eye-displacement trajectory, both for normal saccades and for slow, strongly curved saccades. This dynamic relation depends systematically on the metrics of the saccade vector, and can be fully predicted from a quantitative description of the cell's classical movement field, by introducing the concept of the dynamic movement field.
    We further show that a linear feedback model of the brainstem, which applies dynamic vector summation to measured SC firing patterns, followed by linear decomposition of the vector into two independent linear feedback loops, produces realistic saccade trajectories and kinematics (see figure, top).
    Finally, the model also explains the specific pattern of changes in saccade metrics and kinematics after a small, localized lesion in the SC (Lee et al., Nature, 1988; figure, below).
    We conclude that the SC acts as a nonlinear, vectorial saccade generator that upon the presentation of a single peripheral target programs an optimal, straight eye-movement trajectory. We also propose that the SC is positioned upstream from the local feedback loop.
    We are currently extending our model to coordinated eye-head movements to visual and auditory targets, and to the programming of double-step responses.
    (For a competing theory on the SC, see the website of my colleague Doug Munoz.)
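
    The core of the model fits in a few lines of code. The sketch below (with illustrative parameters; the site vectors and the spike-count threshold are assumptions, not the fitted values) decodes the desired eye displacement by letting every spike add its cell's fixed site vector, and ends the movement when the total spike count reaches a threshold, i.e. the 'gate' described in the figure caption below:

    import numpy as np

    def decode_gaze_trajectory(spike_times, site_vectors, n_spikes_max=600):
        """Cumulative desired eye displacement decoded from SC spike trains.

        spike_times  : list (one entry per cell) of arrays of spike times (s)
        site_vectors : (n_cells, 2) array; the fixed per-spike contribution
                       (deg) of each cell, set by its SC motor-map location
        """
        events = sorted((t, k) for k, ts in enumerate(spike_times) for t in ts)
        pos, traj = np.zeros(2), []
        for count, (t, k) in enumerate(events, start=1):
            pos = pos + site_vectors[k]    # every spike adds its site vector
            traj.append((t, pos.copy()))
            if count >= n_spikes_max:      # fixed number-of-spikes gate
                break
        return traj

    Because the endpoint depends only on which spikes occur, not on when they occur, a blink that slows and stretches the burst alters the kinematics but not the accuracy, consistent with the findings of project 5.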

    Dynamic ensemble coding model of the Superior Colliculus
    (A) Ensemble-coding model of the saccadic system, driven by 136 actual cell recordings. (B-D) Green traces are the predicted, reconstructed saccades of the model; blue traces are the measured responses. Note that the site of activation (red activity profiles in (B)) stays at a fixed location in the SC motor map. Model saccades are straight (C) and have normal velocity profiles (D); the model predicts the nonlinear main sequence quite well (E).
    (B,C) Results of a simulation with the model before (blue) and after (green) a small localized lesion in the SC (represented by the hole in (A)). Note that the saccade vectors are directed away from the lesion ((B); at the circle): saccades directed to a location corresponding with the center of the lesion have a normal amplitude and direction, saccades to targets closer than the lesion center are too small, and saccades to targets beyond the lesion center are too large. The saccade directions also point away from the lesion.
    (C) All saccades in and near the lesioned area also have substantially lower peak velocities.
    These results correspond nicely to the experimental data from Lee, Rohrer and Sparks, published in Nature in 1988. Yet, our model does not employ a vector-averaging scheme, but linear vector addition of individual cell contributions, in combination with a fixed number-of-spikes criterion (represented by the 'gate' in (A), which closes the omnipause switch as soon as the total number of spikes exceeds the threshold).

    Publications:
  • HHLM Goossens and AJ Van Opstal
    "Dynamic ensemble coding of saccades in monkey superior colliculus"
    Journal of Neurophysiology, 95: 2326-2341, 2006
  • AJ Van Opstal and HHLM Goossens,
    "Linear ensemble coding in midbrain Superior Colliculus specifies the saccade kinematics"
    Biological Cybernetics, 98: 561-577, 2008












    7. INVOLVEMENT OF MONKEY INFERIOR COLLICULUS IN THE ENCODING OF COMPLEX SOUNDS.
    Research group: dr. A.J. Van Opstal, dr. H. Versnel and drs. M.P. Zwiers.

    Introduction
    Little is known about the acoustic encoding properties of neurons in the Inferior Colliculus (IC) of the awake monkey. In this project we aim to characterize quantitatively the spectro-temporal receptive fields (STRFs) of IC neurons with a variety of auditory stimuli. Rippled noise stimuli (both static and dynamic) are used to reconstruct the STRF of each neuron. The neurons are also tested with broadband white noise, pure tones at various intensities, and a number of 'natural' sounds (bird songs, monkey calls, among others). The STRF is used to predict the responses to these different stimuli.
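
    In the standard ripple approach (developed by Shamma, Depireux and colleagues; we assume that variant here), the phase-locked response amplitude and phase to each ripple density and velocity provide one sample of a complex transfer function, and a 2D inverse Fourier transform then yields the STRF. A minimal sketch:

    import numpy as np

    def strf_from_ripples(transfer):
        """Reconstruct an STRF from a complex ripple transfer function.

        transfer[i, j]: complex response (amplitude, phase) to the ripple with
        spectral density i (cyc/oct) and velocity j (Hz), arranged with the
        zero-density/zero-velocity ripple at the center of the array.
        """
        strf = np.fft.ifft2(np.fft.ifftshift(transfer)).real
        return np.fft.fftshift(strf)  # axes: (log-frequency lag, time lag)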

    Report:
    So far, 92 neurons have been recorded (two monkeys, three ICs), allowing a complete characterization of the STRF. Neurons have been classified according to their STRF properties.

    Publications:

  • H. Versnel, M.P. Zwiers and A.J. Van Opstal
    Spectro-Temporal Response Fields in the Inferior Colliculus of the Awake Monkey.
    Revista de Acústica, 33, 2002.
  • H. Versnel, M.P. Zwiers and A.J. Van Opstal
    Spectrotemporal response properties of Inferior Colliculus neurons in alert monkey.
    Journal of Neuroscience, 29: 9725-9739, 2009












    8. INVOLVEMENT OF MONKEY INFERIOR COLLICULUS IN THE ENCODING OF SOUND LOCATION.
    Research group: dr. A.J. Van Opstal, drs. M.P. Zwiers and dr. H. Versnel.

    Introduction
    It has been hypothesized that the IC plays an important role in the encoding of sound location, although data from the awake monkey are scarce. It has recently been suggested that neurons in the IC are also sensitive to changes in eye position, but so far only a limited number of (horizontal) eye positions were tested, sound intensities were not varied, and the acoustic properties of the cells were not quantified. Results could therefore have been confounded by a sensitivity of IC neurons to sound level, rather than to sound location.
    In this project, two monkeys were engaged in a visual fixation task, while sounds were presented at various locations in 2D frontal space. We also determined each cell's frequency tuning curve and its sensitivity to sound level. In the location experiments, sounds could have one of three different intensities. To dissociate sensitivity to sound-source azimuth from sensitivity to sound level, we corrected for the acoustic head shadow and the cell's frequency tuning curve, and performed a multiple linear regression analysis (firing rate as a function of a cell's 'perceived', i.e. corrected, intensity, azimuth and elevation). Different combinations of eye-re-head, target-re-head and target-re-eye positions were employed.
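
    A minimal sketch of such a multiple regression (hypothetical data; the head-shadow and frequency-tuning corrections described above are assumed to have been applied already):

    import numpy as np

    def fit_spatial_sensitivity(rate, intensity, azimuth, elevation):
        """Least-squares fit: rate = b0 + b1*intensity + b2*azimuth + b3*elevation."""
        X = np.column_stack([np.ones_like(rate), intensity, azimuth, elevation])
        coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
        return coef  # [b0, b1 (spk/s per dB), b2 and b3 (spk/s per deg)]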

    Report:
    In the large majority of cells (N=94) we recorded a consistent effect of changes in sound-source azimuth, as well as of changes in sound level. A significant proportion of the cells was also tuned to sound elevation, albeit in a more complex way than the monotonic tuning obtained for azimuth. About 29% of the cells were weakly modulated by 2D eye position, an effect that could be described by a two-dimensional eye-position gain field (FR = g·Eh + h·Ev, with Eh and Ev the horizontal and vertical eye position). We have trained a neural network model in which a population of IC cells with the properties described above is capable of encoding the coordinates of the sound both in head-centered and in eye-centered reference frames (figure below). The results are described in a paper that has recently appeared in the Journal of Neuroscience.
    We will also assess whether there is a relation with the STRF properties as determined in the paradigms of project 7.

    Publications:

  • M.P. Zwiers, H. Versnel and A.J. Van Opstal
    Involvement of Monkey Inferior Colliculus in Spatial Hearing.
    Journal of Neuroscience, 24(17): 4145-4156 (2004).

    Model of auditory-evoked orienting
    (a) Scheme underlying the auditory-evoked orienting response of eyes and head. A broadband sound (flat spectrum, left) is filtered by the two ears (through the left (L) and right (R) HRTFs), after which the tonotopic neural stages in the brainstem project to monaural and binaural cells in the left and right Inferior Colliculus (IC). Cells respond monotonically to level (absolute level for monaural cells, level difference (ILD) for binaural cells) within their frequency tuning, and are modulated by eye position. The IC transmits its output to the deep layers of the Superior Colliculus (SC), in which a Gaussian population of recruited cells encodes the gaze motor-error signal. Inset left: vectorial scheme showing the required transformation of head-centered acoustic input (H) into eye-centered gaze motor error (M) by incorporating eye position (E).
    (b) Properties of an IC cell from the model. Best frequency is 7 kHz. The cell responds to sound level (abscissa) in a monotonic way and, on top of that, to changes in eye position (cf. left vs. right panels, different eye positions, in deg).
    (c) The model consists of a neural network implementation of the scheme in (a). The IC contains 72 cells with randomly selected tuning parameters, as in (b). Output of the model is the localized Gaussian activity pattern in the motor SC. Training was done with the Widrow-Hoff delta rule. The figure shows a simulation of the model's output according to the inset in (a). The error is about 27 spikes (absolute error across the SC map, a 10x10 matrix); the correlation between model output and required output is 0.96, which is a typical result.
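
    For the interested reader, the Widrow-Hoff (delta/LMS) rule amounts to a single update equation. A minimal sketch with the sizes from the caption (the learning rate and weight initialization are our assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    n_ic, n_sc, lr = 72, 100, 0.01          # 72 IC cells, 10x10 SC map
    W = rng.normal(0.0, 0.1, (n_sc, n_ic))  # IC -> SC weights

    def train_step(ic_activity, sc_target):
        """One Widrow-Hoff update: W += lr * outer(error, input)."""
        global W
        sc_out = W @ ic_activity            # linear readout of the IC population
        err = sc_target - sc_out            # error across the 10x10 SC map
        W += lr * np.outer(err, ic_activity)
        return np.abs(err).sum()            # cf. the ~27-spike error quoted above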











    9. INFLUENCE OF SOUND INTENSITY AND SOUND DURATION ON HUMAN SOUND LOCALIZATION.
    Research group: dr. A.J. Van Opstal, dr. P.M. Hofman and drs. J. Vliegen.

    Introduction
    We have recently shown that sound duration has a systematic effect on the gain of the elevation components of human sound localization responses (Hofman and Van Opstal, 1998). This finding has recently been challenged by Macpherson and Middlebrooks (JASA, 2000), who hypothesized that the effect is due to a cochlear nonlinearity (saturation), rather than to a neural integration stage. If so, one expects that systematic variation of sound duration and intensity would yield the lowest slopes (and highest variability) for stimuli with the highest intensities and shortest durations. To test this, we systematically varied both sound intensity and sound duration (16 different stimuli, all randomly interleaved), while subjects made rapid head movements to targets in 2D space (within 75 deg in all directions).

    Report:
    We find that stimulus duration systematically modulates the gain of the elevation responses for all intensities applied. In contrast to the prediction of the cochlear theory, the effect of sound intensity is reversed: the lowest intensities yield the lowest slopes ('positive level effect'), while at higher intensities the slope decreases, but only for the short-duration stimuli. Thus, intensity and duration both have a comparable influence on the elevation gain. These results cannot be explained by a peripheral saturation mechanism. We propose an extended version of the Hofman and Van Opstal model, which accounts for the intensity-related effects on elevation gain. An explanation is also offered for the considerable day-to-day variability of the quantitative measures (most notably the gain), which requires data to be collected within the same experimental session, rather than in separate blocks on different days. The results have recently been published in JASA (Vliegen and Van Opstal, 2004).
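
    The intuition behind such a neural integration account can be captured in a toy calculation (our paraphrase, not the published model): if the elevation percept combines noisy short-time spectral-cue estimates with a conservative (zero-mean) prior, the response gain grows with duration (more evidence) and shrinks at low intensities (noisier evidence):

    def elevation_gain(duration_ms, noise_sd, prior_sd=10.0, window_ms=5.0):
        """Toy gain of the mapping from true to perceived elevation.

        Each analysis window contributes one noisy spectral-cue estimate; a
        zero-mean prior with SD prior_sd shrinks the estimate toward zero.
        """
        n = max(1, int(duration_ms / window_ms))  # number of evidence samples
        like_var = noise_sd ** 2 / n              # evidence sharpens with duration
        return prior_sd ** 2 / (prior_sd ** 2 + like_var)  # gain between 0 and 1

    # e.g. elevation_gain(3, noise_sd=8.0) ~ 0.61, elevation_gain(50, 8.0) ~ 0.94:
    # longer sounds (and cleaner cues) yield steeper stimulus-response slopes.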

    Publications:

  • P.M. Hofman and A.J. Van Opstal
    Spectro-temporal factors in two-dimensional human sound localization.
    Journal of the Acoustical Society of America 103: 2634-2648, 1998.
  • J. Vliegen and A.J. Van Opstal
    Influence of duration and level on human sound localization.
    JASA 115: 1705-1713 (2004)


    Influence of sound duration and intensity on human sound localization
    Influence of sound duration and sound level on the stimulus-response relation (here quantified by the slope, or 'gain', of the linear regression lines) of sound-localization responses for subject FF. Stimuli were broadband noise with one of 16 different duration/level combinations, all presented randomly interleaved in one experimental session (about 950 trials).
    Azimuth response components of the head movements (open circles) are very robust, as the regression lines are stable for nearly all stimulus conditions (only for the weakest and shortest stimulus is there more variability in the responses).
    Elevation responses (filled triangles) are systematically affected by both sound duration (rows) and sound level (columns). For all sound levels, the elevation gain increases with sound duration. For the shortest durations, however, the slope is also clearly affected by sound level. Thus, both acoustic factors contribute to the elevation response.











    10. THE ENCODING OF SPECTRAL SHAPE CUES IN THE HUMAN SOUND LOCALIZATION SYSTEM.
    Research group: dr. A.J. Van Opstal and dr. P.M. Hofman.

    Introduction
    The perception of sound elevation relies on the spectral shape cues provided by the pinna. In this project we ask two questions: (a) which spectral features are responsible for a given elevation percept, and (b) how do the spectral representations of the two ears combine into a single elevation percept?
    The first question is studied with a new psychophysical method, in which eye-movement responses are measured to randomly shaped sound spectra emanating from a fixed speaker at the straight-ahead location. Applying Bayes' theorem then allows for a reconstruction of the relevant spectral features.
    The second question is investigated by studying eye-movement responses under four different hearing conditions: both ears free, both ears with a mold, and either ear with a mold.
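
    The Bayesian reconstruction behind the first question can be sketched as follows (a minimal reading of the method: with a flat prior over the random spectra, the posterior reduces to a response-conditioned average of the presented spectra):

    import numpy as np

    def feature_for_elevation(spectra, responses, elev, bandwidth=5.0):
        """Spectral feature associated with perceiving elevation 'elev'.

        spectra   : (n_trials, n_freq_bins) presented random spectra (dB)
        responses : (n_trials,) elicited elevation responses (deg)
        Returns the mean spectrum of trials answered within 'bandwidth' of
        elev, relative to the grand mean spectrum.
        """
        sel = np.abs(responses - elev) < bandwidth
        return spectra[sel].mean(axis=0) - spectra.mean(axis=0)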

    Report:
    Applying Bayes' rule leads to an independent reconstruction of the perceptually relevant spectral shape cues with a minimum of a priori assumptions. Interestingly, the spectral features closely resemble the measured acoustics of the head-related transfer functions (Hofman and Van Opstal, Biol. Cybern., 2002).
    The monaural-binaural mold experiments show that both ears contribute to the elevation percept. The relative weight of an ear changes gradually from maximal (on its far-ipsilateral side) to zero (on the far-contralateral side), with intermediate values near the midline. Two conceptual models are discussed to explain these findings (Hofman and Van Opstal, Exp. Brain Res., 2003).

    Publications:

  • P.M. Hofman and A.J. Van Opstal
    Bayesian reconstruction of sound localization cues from responses to random spectra.
    Biological Cybernetics, 86: 305-316, 2002.
  • P.M. Hofman and A.J. Van Opstal
    Binaural weighting of pinna cues in human sound localization.
    Experimental Brain Research,148: 458-470, 2003.












    11. AUDITORY-VISUAL INTERACTIONS UNDERLYING SACCADE GENERATION IN A COMPLEX SCENE.
    Research group: dr. A.J. Van Opstal, drs. M. Van Wanrooij, Prof. D.P. Munoz (Queen's Univ., Kingston, Ontario), and dr. B.D. Corneil (Caltech, Pasadena, CA).

    Introduction
    The majority of studies on auditory-visual integration have been performed under highly simplified conditions, in which both the number of potential targets and the surroundings were limited. In this study, part of our Human Frontiers grant, we have studied saccadic eye movements generated in a complex auditory-visual background, to either a visual, an auditory, or an auditory-visual target. In the latter case, the stimuli were always spatially aligned, thus avoiding any task ambiguities. Both the relative intensity of the target sound re. the background and its timing re. the visual stimulus were varied in a systematic way.

    Report:
    The results show that the strongest benefit of auditory-visual integration is obtained for the lowest sound intensities, provided the target sound is presented after the visual stimulus. All 12 auditory-visual conditions provided evidence for auditory-visual integration, as no data points coincided with the unimodal results. These experiments provide for the first time a behavioral correlate of the phenomenon of inverse effectiveness, as has been reported for neural responses in the superior colliculus of the anesthetized cat: the strongest response enhancements are obtained for the weakest stimuli.
    We are currently studying the effect of systematic spatial separation of the stimuli under the same conditions.
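
    Multisensory enhancement is commonly quantified by comparing the bimodal response with the best unimodal response; the index below is the standard measure from the multisensory literature (not necessarily the exact metric used in the paper). Inverse effectiveness predicts that it grows as the auditory intensity drops:

    def enhancement_index(av, a, v):
        """Percent enhancement of the audiovisual response over the best unimodal."""
        best = max(a, v)
        return 100.0 * (av - best) / best

    # Hypothetical success rates within 500 ms of stimulus onset:
    # enhancement_index(av=0.9, a=0.4, v=0.6) -> 50.0 (percent)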

    Publications:

  • B.D. Corneil, M. Van Wanrooij, D.P. Munoz and A.J. Van Opstal
    Auditory-visual interactions subserving goal-directed saccades
    in a complex scene.
    Journal of Neurophysiology, 88:438-454, 2001.
  • A.J. Van Opstal and D.P. Munoz
    Auditory-visual interactions subserving primate gaze orienting.
    The Handbook of Multisensory Processes, (Calvert, Spence, Stein, eds.), MIT Press, Cambridge, MA, pp. 373-393, 2004.
  • M.A. Frens and A.J. Van Opstal
    Visual-auditory interactions modulate saccade-related activity in monkey superior colliculus.
    Brain Research Bulletin, 46, 211-224, 1998
    Saccades in a noisy multisensory environment
    (A) Saccade scan patterns to a visual-only (V), auditory-only (A) and audiovisual (AV) target, hidden in a complex audio-visual scene (open dots: green LEDs; peripheral stars: background speakers). The V and A scans consist of multiple saccades before finally landing on the target, while the AV response reaches the target with a single saccade. The auditory target was a broadband 'buzzer', with an intensity 21 dB below the 60 dB background noise.
    (B) Cumulative distributions of the percentage of responses that reached the target, as a function of time after stimulus onset (data from subject BC). Clearly, the AV responses in which the stimuli were presented simultaneously (AV), or in which the visual stimulus led the auditory by 100 ms (V100A), were more successful than responses to unimodal stimuli, or to the condition in which the visual stimulus lagged the auditory by 100 ms (A100V). (From: VO&M, 2004.)












    12. INFLUENCE OF HEAD POSITION ON HUMAN SOUND LOCALIZATION.
    Research group: dr. A.J. Van Opstal and dr. H.H.L.M. Goossens.

    Introduction
    In this project we study the frame of reference in which sounds are represented, as well as the coordination strategy used by the eye-head gaze-orienting system.
    In particular, we wondered whether the auditory system accounts for intervening changes in eye and head orientation when programming a goal-directed gaze shift toward a sound. Two paradigms were used to study this question. In the double-step paradigm, subjects made eye-head movements first to a visual target, and then to a brief noise burst; both stimuli were extinguished before eyes and head started to move. In the static head-position paradigm, subjects made eye movements to tones of different frequencies, while their head was pitched at different angles re. the horizontal plane.

    Report:
    The data show that intervening eye and head movements are fully accounted for in programming an eye-head gaze shift to an auditory target. The static pitch experiments show that the fixed elevation perceived for pure tones is neither head-fixed nor space-fixed, but is modulated by head position with a gain between zero and minus one. This gain depends on sound frequency, suggesting that head orientation already influences the programming of sound location within the tonotopic representations of the auditory system (Goossens and Van Opstal, 1999).
    The eye-head control system is capable of generating goal-directed movements of both the eyes and the head, even when the two motor systems are not aligned. The experiments indicate that the common gaze displacement signal, issued by the superior colliculus, is decomposed into separate motor error signals for eyes and head at a downstream stage (Goossens and Van Opstal, 1997).
    (See also project 14 for a follow-up study on this work.)
    Publications:

  • H.H.L.M. Goossens and A.J. Van Opstal
    Human eye-head coordination in two dimensions under different sensorimotor conditions.
    Experimental Brain Research, 114: 542-560, 1997
  • H.H.L.M. Goossens and A.J. Van Opstal
    Influence of head position on the spatial representation of acoustic targets.
    Journal of Neurophysiology, 81, 2720-2736, 1999.
    Influence of static head position on pure-tone localization.
    Eye-position responses relative to the head are shown as a function of head orientation relative to space (or body) for four different sound frequencies (1.0-7.5 kHz). Measured slopes are all negative, but most differ from -1.0 (which would mean full compensation for head orientation, i.e. a target encoded in world coordinates). Note the clear dependence of the slope on sound frequency: localization of the 2000 Hz tone is nearly independent of head orientation, suggesting that it remains in a head-centered code, whereas the 1000 Hz tone responses follow a slope very close to -1.0. Most tones are represented neither in head-centered nor in world coordinates. Responses to Gaussian white noise follow a slope very close to -1.0 (not shown).












    13. THE GENERATION OF RAPID DOUBLE-STEP SACCADES: DISPLACEMENT VS. POSITION INTEGRATOR?
    Research group: dr. A.J. Van Opstal and dr. H.H.L.M. Goossens.

    Introduction
    It has recently been shown that electrical microstimulation in the deep layers of the superior colliculus (SC) immediately following a saccade yields saccade displacement vectors with amplitudes and directions that depend on the interval between primary-saccade offset and stimulation onset. It was hypothesized that these data support the notion of a resettable integrator in the local feedback loop with a time constant of about 50 ms. If true, it would not be possible to generate accurate double-step saccades at intervals shorter than 50 ms. We studied this point behaviorally (in human and monkey), and electrophysiologically by recording and stimulating in the deep layers of the SC.

    Report:
    Both humans and monkeys are able to generate accurate double-step saccades at intervals below 20 ms (paper in J. Neurophysiol.). The electrophysiological data indicate that the saccade-related burst in the SC for the second saccade is faithfully predicted by the cell's movement field, irrespective of the intersaccadic interval. Moreover, the microstimulation results indicate that (a) the effect remains when a visual target is presented along with the electrical stimulation (hence, it is irrelevant whether or not the monkey is programming double-step saccades), and (b) the time constant of the data depends heavily on the direction of the primary saccade and on the site of stimulation. These findings are at odds with the hypothesis of a resettable integrator acting downstream from the SC. We reconcile these data by incorporating a role for the fastigial nuclei in saccade generation.

    Publications:

  • H.H.L.M. Goossens and A.J. Van Opstal
    Local feedback signals are not distorted by prior eye movements: evidence from visually-evoked double-saccades.
    Journal of Neurophysiology, 78, 533-538, 1997
  • H.H.L.M. Goossens and A.J. Van Opstal
    Involvement of monkey superior colliculus in the generation of double-step saccades.
    Experimental Brain Research, in preparation.












    14. DYNAMIC AUDIO-VISUAL LOCALIZATION DURING RAPID EYE-HEAD GAZE SHIFTS.
    Research group: dr. A.J. Van Opstal, drs. J. Vliegen and T. van Grootel.

    Introduction
    We have recently shown (in project 12) that the auditory system is able to compensate for an intervening eye-head gaze shift that occurred after the presentation of brief visual and auditory stimuli in a classic (static) double-step paradigm. Although these results support the hypothesis that the auditory system represents targets in a spatial (or body-centered), rather than a head-centered, reference frame, there is an important alternative explanation that cannot be ruled out: the system could have updated the head-centered target location by applying prior knowledge about the upcoming gaze displacement ('predictive remapping', a hypothesis that has been put forward in the visuomotor literature, and for which compelling neurophysiological evidence has been provided).
    In this project we test the different models by presenting either an auditory stimulus (experiment 1) or a visual stimulus (experiment 2) during the rapid eye-head gaze shift toward the initial visual target. This dynamic condition poses two additional challenges for the gaze control system compared with the static condition. First, since the target is presented in mid-flight, the system is denied any prior information about its location and about the subsequent intervening movement. Second, due to the rapid and variable head movement, the acoustic cues vary continuously during target presentation; for visual stimuli, the target is swept across the retina at a high and variable speed as a result of the eye movement. This denies the auditory system stable head-centered acoustic input, and the visual system stable eye-centered visual input. If the localization responses remain accurate under these conditions, the predictive remapping hypothesis cannot account for the behavior.
    Report:
    Experiments were performed with nine (experiment 1) and eight (experiment 2) human subjects. The second target was presented either early during the first gaze shift, or near peak head or eye velocity (late-triggered condition). The results show that the sound-localization responses for a simple single-target trial, for the static double-step trial, and for the (early and late) dynamic double-step trials are equally accurate. These results therefore show that the auditory system does not use predictive remapping to program a gaze shift toward sounds under these dynamic conditions. We obtained similar results for the visual-visual condition (see figure below).
    Instead, our data are best explained by a mechanism that represents the target in a spatial reference frame (in body-centered, or even world, coordinates). Interestingly, for all trial types (single-step, static double-step, dynamic double-step) the elevation (not the azimuth) components of the localization responses were more accurate for the 50 ms noise bursts than for the 3 ms noise bursts, which shows that the ongoing integrative process that extracts sound-source elevation from the spectral cues already accounts for the dynamic changes in head orientation.
    Two research papers have now been accepted for publication: one in the Journal of Neuroscience (August 2004), and one in the Journal of Neurophysiology (August 2005).

    Publications:

  • J. Vliegen, T.J. Van Grootel and A.J. Van Opstal
    Dynamic sound localization during rapid eye-head gaze shifts.
    Journal of Neuroscience, 24 (42): 9291-9302 (2004)
  • J. Vliegen, T.J. Van Grootel and A.J. Van Opstal
    Gaze orienting in dynamic visual double steps.
    Journal of Neurophysiology, in press, 2005
    Dynamic eye-head coordination
    Prediction of three different models to explain double-step behavior for rapid eye-head gaze shifts. The graphs show the predicted horizontal and vertical displacements of the eye-in-space ('gaze') for the second gaze shift in response to a sequence of two brief visual stimuli, plotted against the actually measured gaze shift.
    The visual-predictive model assumes that the gaze-control system accounts for the initial retinal error of the first visual stimulus, allowing a prior, preprogrammed estimate of the upcoming gaze shift to that target. The motor-predictive model is assumed to use the actual motor command ('efference copy') of the upcoming gaze shift, whereas the dynamic feedback model takes into account the actual gaze shift following target presentation. In the dynamic double-step, these three models can be dissociated. It is clear from this figure that the predictions of the dynamic feedback model correlate best with the actually measured gaze shifts.
    (A) data from paradigm in which the second target was triggered by the head movement; (B) second target was triggered by the eye movement.
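
    The bookkeeping that separates the three models is simple vector arithmetic. A minimal sketch (variable names are ours; all quantities are 2D displacement vectors in deg):

    import numpy as np

    def second_gaze_shift(T2_retinal, intervening_estimate):
        """Predicted second gaze shift: the retinal location of the second
        target, compensated with the model's estimate of the intervening
        gaze shift."""
        return np.asarray(T2_retinal, float) - np.asarray(intervening_estimate, float)

    # The three models differ only in the estimate that is plugged in:
    #   visual-predictive : initial retinal error of the first target (the plan)
    #   motor-predictive  : efference copy of the programmed first gaze shift
    #   dynamic feedback  : gaze shift actually executed after T2 presentation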














    15. ROLE OF MONKEY AUDITORY CORTEX IN THE LEARNING AND REPRESENTATION OF COMPLEX SOUNDS.
    Research group: dr. A.J. Van Opstal and dr. H. Versnel.

    Introduction
    In this project we study the responses of single neurons in primary auditory cortex (AI) and the surrounding belt area of awake, trained rhesus monkeys that perform a complex sound-detection task. The monkey has to detect a subtle change in the spectro-temporal properties of a complex sound, and reports its percept by quickly releasing a bar. Spectro-temporal receptive field properties of the cell under study (see also project 7) will be measured before and during task execution.
    In the experiments, the stimulus change occurs at one of two moments after stimulus onset: at 500 ms (condition a) or at 1000 ms (condition b). These (a, b) trials are randomly interleaved.
    Five blocks of 110 trials are run for each cell, in which the probability of type-a vs. type-b trials is varied: [100% (a), 0% (b)], [75, 25], [50, 50], [25, 75], and [0, 100].
    In a second experiment, the stimulus change consists of a weak tone within the neuron's spectro-temporal receptive field that is added to a complex broadband ripple. The monkey should pay careful attention to the particular frequency band in order to detect the onset of the weak tone. We will study whether the STRF of the cell changes as a result of these task requirements. We will also check whether correct and incorrect responses of the monkey can be predicted from the neuron under study.

    Report:
    So far, behavioral experiments have been performed with two rhesus monkeys. The first monkey is currently engaged in the AI recordings.











    16. EFFECT OF BODY TILT ON NEURAL RESPONSES IN MONKEY INFERIOR COLLICULUS.
    Research group: dr. A.J. Van Opstal and Ph.D. student or Postdoctoral fellow.

    Introduction
    A recent behavioral study from our lab (project 12) showed that static head orientation influences the localization responses to pure tones. The effect depends on sound frequency, suggesting that head position is taken into account within the tonotopic auditory system.
    In addition, recent evidence suggested that eye position might influence the responses of cells in the inferior colliculus. Although we could not replicate the latter finding, our own recordings indicate that cells in the IC may be involved in the spatial representation of sounds. In this project, we study whether these cells are sensitive to changes in static head orientation. To that end, a monkey will be trained to fixate targets (either head-fixed, or space-fixed), while sounds are presented (head-fixed or space-fixed) at different locations. The monkey will be tilted re. gravity while responses of IC cells are measured. We will also record from pairs of single units by employing a double-electrode setup.
    Two-axis vestibular primate chair
    Chair for making whole-body rotations around two independent (horizontal and vertical) axes. The axes are controlled by Harmonic Drive systems.
