ORIENT ERC adv Grant nr. 693400

Project summary

Rapid object identification is crucial for our survival, but it poses daunting challenges to the brain when many stimuli compete for attention and multiple sensory and motor systems are involved in planning and generating an eye-head gaze-orienting response to a selected goal.

Research question: How do normal and sensory-impaired brains decide which signals to integrate ("goal"), or suppress ("distracter")?

Audiovisual (AV) integration only helps for spatially and temporally aligned stimuli. However, sensory inputs differ markedly in their reliability and precision, reference frames, and processing delays, yielding considerable spatial-temporal uncertainty. Moreover, vision and audition utilise coordinates that misalign whenever the eyes and head move, while their sensory acuities vary across space and time in essentially different ways. As a result, assessing AV alignment poses a number of major neuro-computational problems, which so far have been studied for the simplest stimulus-response conditions only.
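The reliability-weighting problem described above has a standard formalisation in maximum-likelihood cue combination, in which each cue is weighted by its inverse variance. The sketch below is purely illustrative (the locations and variances are invented numbers, not data from this Action, and the Action's own dynamic Bayesian model is more elaborate):

```python
def integrate_av(x_vis, var_vis, x_aud, var_aud):
    """Reliability-weighted (maximum-likelihood) fusion of two independent
    Gaussian location estimates. Each cue is weighted by its inverse
    variance; the fused estimate is more precise than either cue alone."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_aud
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_aud)
    return x_hat, var_hat

# Example: precise visual cue (variance 1 deg^2) vs. coarse auditory cue
# (variance 4 deg^2): the fused estimate sits close to the visual cue.
x, v = integrate_av(x_vis=10.0, var_vis=1.0, x_aud=14.0, var_aud=4.0)
```

Note that this static rule already breaks down when the two cues are misaligned in space or time, or expressed in different reference frames, which is precisely the regime studied in this project.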

In three integrated research projects my team studies and models bottom-up (sensory) and top-down (task-driven) gaze-control mechanisms of AV integration in complex environments:
  1. We impose unique eye-head gaze-orienting tasks on healthy controls and on patients with sensory, motor, or cognitive disorders, while systematically varying AV stimulus statistics. We will uncover how healthy and impaired systems construct Priors about the environment, and utilise dynamic active and passive self-motion feedback (1 PhD/2 postdocs; Nijmegen).
  2. We challenge prevailing computational models of statistical inference in the brain by incorporating top-down control and fast eye-head sensorimotor feedback into cortical-midbrain neural network models, driven by our novel psychophysical data and our recent monkey neurophysiological data (1 postdoc; Nijmegen).
  3. As a critical test for our concepts and models, we apply our results to a novel autonomous humanoid eye-head robot that is equipped with foveal vision, realistic auditory inputs, three-dimensional nested motor systems, and rapid sensorimotor feedback and learning algorithms (1 PhD/1 postdoc/1 engineer; Lisbon).


Back to the ORIENT home page.


The Nijmegen team.

John van Opstal, coordinator
Jesse Heckman, PhD student, psychophysics
Annemiek Barsingerhorn, postdoc, psychophysics
Ad Snik, co-supervisor
Francesca Rocchi, postdoc, psychophysics
Arezoo Alizadeh, postdoc, computational modelling
Our experimental facilities and support staff
Gunter Windau, ICT
Stijn Martens, Mechanics
Ruurd Lof, physics, Electronics
Two-axis vestibular chair
Auditory setup ('the sphere')
fNIRS - EEG lab





The Lisbon team.

Alexandre Bernardino, co-PI
Jose Santos-Victor, Dept. Head
Our prototype Eye
Miguel Lucas, master student, robotics (Apr-Oct 2017)
Alex and Carlos Aleluia, master student, robotics
Mariana Martins, master student, computer vision
       
Akhil John, PhD student, robotics
Eye with torsional pulley for the Superior Oblique
Omar Salah, postdoc, robotics
Rui Cardoso, master student, control






Subproject 1. Multisensory human gaze-orienting behavior in health and disease.

Background and relevance.
Human AV integration has typically been studied under head-fixed, stationary conditions in simple sensory environments. In this Subproject we will extend the current body of knowledge, by studying multisensory evoked orienting responses in cluttered AV environments for: (i) head-unrestrained active gaze shifts, with or without (ii) whole-body low-frequency passive rotation through 2-axis vestibular stimulation.
While active eye-head orienting allows the use of all available sensory cues and efferent motor-feedback signals (Figure B2-4), head-restrained passive vestibular rotation denies feedback from neck proprioception and head-motor corollary discharges. Comparing these movement conditions will allow us to assess the importance and contribution of motor-feedback signals to overt orienting behaviour in health and disease, in particular under challenging audiovisual search conditions.



The research problems outlined in Section (a) are studied in healthy control subjects and in different groups of patients. The philosophy of my experiments is that the advanced neuro-computational algorithms for multisensory evoked gaze shifts provide highly nontrivial challenges, in particular for the impaired brain. Further, by studying the behavioural strategies of patients with particular sensory impairments (visual, auditory, or vestibular) our Bayesian theory predicts specific, quantitative effects (regarding precision and accuracy) of reduced sensory resolution on AV integration and eye-head coordination, and will shed new light on the implementation of statistical processing in the brain.
I firmly argue that these insights will be of crucial importance for future improvements of sensory aids and behavioural therapies for these patients.




Subproject 2. Computational modelling

Beyond the state-of-the-art:
Our conceptual dynamic Bayesian model will be formulated into a neurobiologically realistic neuro-computational model of multisensory-evoked eye-head gaze control, guided by the results from our most recent monkey recordings in the midbrain, and our new human behavioural experiments. We will significantly extend our optimal control model of the midbrain Superior Colliculus (SC) (Figure 5), which explains how (eye-only) saccade-related activity of the neuronal population in the SC encodes the instantaneous trajectory and kinematics of saccadic eye movements by summing the instantaneous spike contribution of each neuron across the recruited population:



We recently discovered an important topography in the functional organisation of the SC, in which peak firing rates and burst durations of recruited cells vary systematically with their location in the SC motor map: rostral cells (associated with small saccades) have higher firing rates for their optimal saccade (~800 spks/s) than caudal cells (large saccades; ~300 spks/s). Our theoretical analysis revealed that this property underlies the well-documented nonlinear, saturating relation between saccade amplitude and peak eye velocity. Interestingly, this saccade "main sequence" reflects an optimal control strategy that implements a speed-accuracy trade-off in the primate oculomotor system. It accounts in the best possible way for two conflicting constraints: (i) to generate a precise eye movement as fast as possible, (ii) despite the poor spatial resolution (high uncertainty) of the peripheral retina.
We have thus identified the first neurophysiological correlate of an optimal control mechanism in the brain!
Our model is driven by recorded spike trains, and predicts the full kinematic repertoire of eye saccades in all directions and amplitudes with a remarkable simplicity of only two free parameters (gain B and feedback delay in Fig. 5).
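The linear ensemble-coding scheme can be illustrated with a toy simulation: each spike of each recruited neuron adds a fixed site-specific movement contribution, and the running sum across the population traces the instantaneous trajectory. Every number below (population size, firing rates, per-spike contributions) is invented for illustration, and the gain B and feedback delay of Fig. 5 are deliberately left out:

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 50                                  # hypothetical recruited population
m_k = np.full((n_neurons, 2), [0.02, 0.01])     # movement contribution per spike (deg)

# Poisson burst of 50 ms (1-ms bins); rostral cells fire faster than caudal ones
rates = np.linspace(800.0, 300.0, n_neurons)    # spks/s
spikes = rng.poisson(rates[:, None] * 1e-3, size=(n_neurons, 50))

# Linear ensemble coding: the desired instantaneous displacement is the
# cumulative sum of every neuron's spike contribution m_k.
contrib = spikes[:, :, None] * m_k[:, None, :]  # (neuron, time, hor/vert)
trajectory = contrib.sum(axis=0).cumsum(axis=0)

print(trajectory[-1])  # total saccade vector encoded by the whole burst
```

Because each spike adds a fixed vector, the trajectory is monotone toward the goal, and the instantaneous population firing rate directly sets the instantaneous eye velocity, which is the core of the scheme described above.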

My ERC project:
The model of Figure 5 is far from complete, and will form the starting point for a complete 3D sensorimotor scheme that includes eye-head coordination, and local excitation and global inhibition within an intrinsic SC network, such that it explains, rather than uses, the recorded bursting activity profiles of the SC cells.

So far (June 2019), we have made considerable progress in our modelling efforts: we have built a full computational model with spiking neurons, and extended the model of Figure 5 to eye-head coordination (see our publications: Kasap and Van Opstal, 2018a,b,c, 2019; Van Opstal and Kasap, 2018, 2019).




Subproject 3. Humanoid Eye-Head Audio-Visual Robotic System

Motivation
The real critical test of the validity of our models is a realistic hardware implementation that has to cope flexibly with the same fundamental target selection, acquisition, and following problems as human subjects (see Cully and Clune, Nature 521: 503-507, 2015, for recent strong support of this argument). In my third subproject, my collaborating team at the Instituto Superior Técnico in Lisbon will design and test a novel humanoid eye-head robotic system that is guided by the same principles as uncovered from our monkey recordings (Fig. 5), our human psychophysics (Fig. B4-2), and our modelling.
The test is critical because (i) a robot will face truly unexpected situations in the environment and in its own dynamics, which will be unforeseen in necessarily restricted computer-only simulations. (ii) Its physical dynamics can be quite complex and a-priori unknown, because of (unknown) internal delays in its circuitry, nonlinear interactions, and complex dependencies between different segments; these aspects are at best simplified in computational modelling. (iii) The robot will have to autonomously discover and learn its own dynamics and optimal, yet flexible, behaviours through reinforcement (minimising "costs") and through exploration of, and interaction with, the environment, relying on the general principles uncovered in my Subprojects 1 and 2. An important challenge to be solved in this Subproject will be to define the set of Cost Functions (e.g., speed-accuracy trade-off; energy expenditure; movement duration; task constraints; trajectory formation) under which the control is optimised, just as in humans.
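As an illustration of what such a combined objective might look like, the sketch below folds a few of the listed costs into a single scalar to be minimised during learning. The functional form and all weights are hypothetical placeholders, not the Action's actual cost functions:

```python
def movement_cost(duration_s, error_deg, energy_j,
                  w_time=1.0, w_acc=10.0, w_energy=0.1):
    """Weighted sum of competing movement costs, expressing a
    speed-accuracy trade-off. All weights are hypothetical tuning knobs
    that a reinforcement learner (or evolution) would have to settle."""
    return w_time * duration_s + w_acc * error_deg ** 2 + w_energy * energy_j

# A fast-but-sloppy saccade vs. a slow-but-accurate one:
fast = movement_cost(duration_s=0.05, error_deg=2.0, energy_j=1.0)
slow = movement_cost(duration_s=0.30, error_deg=0.2, energy_j=0.5)
# With these weights, the optimiser prefers the slower, accurate movement.
```

The interesting empirical question raised above is which set of such terms and weights reproduces human-like gaze behaviour, including the main-sequence kinematics of Fig. 5.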

So far, the gaze-control behaviours of existing robots are not guided by the principles of the human brain. For example, the eyes are typically simplified to cameras with homogeneous fields of view, equipped with relatively simple Winner-Take-All computations to find an instructed target. Similar limitations apply to the robot's ears. However, our studies, and those of others, have made clear that the presence of a fovea in primate eyes (predator!) is crucial for understanding the efficient, rapid, and precise search for targets through fast saccadic eye-head gaze shifts with optimal kinematics and limited (visual, serial) processing time. Further, adequate sound localisation (on the basis of different, but complementary, acoustic cues) is needed to extend the range of the limited visual resolution to far beyond the fovea and visual field, and to allow for efficient audio-visual integration that makes the total system better than the sum of its parts. For these reasons we will construct, in close collaboration with our Lisbon colleagues, our own novel robot system. Figures 8 and 9 illustrate the basic mechanics of our planned eye-head robot.

Fig. 8 Fig. 9

Figure 8: (a) Schematic of our robotic humanoid eye, consisting of three antagonistic muscle pairs, each connected to a single rotational motor that is driven by the neural network models of Subproject 2. Motors and muscles are drawn displaced re. the eye for illustrative purposes only. For example, the medial and lateral recti (MR, LR) are both connected to a single motor with a vertical rotation axis. Motors can rotate bi-directionally, at a velocity that is specified by our SC model (Fig. 5). The LR and MR will thus be appropriately lengthened or shortened to generate a saccade. Horizontal-vertical rotations will be confined to about 40 deg from straight ahead, and torsion to about 15 deg, to impose a realistic oculomotor range that necessitates early use of head movements for peripheral goals. The three motors together implement the 3D kinematic principles described by Listing's and Donders' laws. We will investigate whether the muscles should also be equipped with strain gauges to provide proprioceptive feedback signals. (b) The visual input is filtered such that it has high resolution only around the center ('foveal vision').
Figure 9: (a) Robot implementation of the humanoid head, which will generate kinematically correct 3D head rotations for natural gaze shifts that follow Donders' law. The three nested axes have different attachment origins, as in the human head. The eyes are illustrated on the left. (b) The robot's ears are 3D replicas of a real human ear, producing realistic elevation-dependent spectral shape cues (HRTFs).
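The eccentricity-dependent filtering of Figure 8b can be sketched numerically. The code below is a crude stand-in (Gaussian acuity fall-off, block-average "blur", invented parameters), not the actual filter running on the robot:

```python
import numpy as np

def foveate(image, sigma_frac=0.15, block=8):
    """Toy foveal filter: keep full resolution at the image center and
    blend toward a coarse block-averaged copy with increasing
    eccentricity. Image sides must be multiples of `block`."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalised eccentricity of each pixel re. the image center
    ecc = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    acuity = np.exp(-(ecc / sigma_frac) ** 2)   # 1 at the 'fovea', -> 0 outside
    # cheap low-resolution copy via block averaging
    coarse = (image.reshape(h // block, block, w // block, block)
                   .mean(axis=(1, 3))
                   .repeat(block, axis=0).repeat(block, axis=1))
    return acuity * image + (1.0 - acuity) * coarse
```

Applied to a camera frame, detail survives only in a central window, so the controller must foveate a peripheral target with a saccade before it can identify it, exactly the constraint that makes the robot's search behaviour human-like.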






Progress reports (dated July 2019)


Months 1-18


The project started in January 2017 with setting up the collaboration with the Visual Lab of prof Alexandre Bernardino of the Robotics Institute at the Instituto Superior Técnico in Lisbon as third Linked Party. The project coordinator (JvO) paid several visits to the Lisbon group to jointly draft a formal Memorandum of Understanding, in which both parties agreed on the terms of the collaboration. The MoU was formally signed by the Institute Deans of both Universities (Nijmegen and Lisbon) in Sept. 2017.

The two-axis vestibular chair at the Faculty of Science (part of Radboud Research Facilities, RRF) became available for testing by the end of the spring of 2017. During the first few months of the project (Feb-May 2017), the PI appointed Bahadir Kasap (PhD student) to work on a model of the oculomotor midbrain by implementing a novel spiking neural network algorithm (implementation of Subproject 2). At the moment of writing this report (August 2018), five manuscripts have arisen from this work (two publications appeared: in the Journal of Neurophysiology (2018) and in Neurocomputing (2018); three papers have been submitted: to Frontiers in Applied Mathematics and Statistics, to PLoS Computational Biology, and to Progress in Brain Research).

In the meantime, the PI initiated a search for the PhD and postdoc positions of Subproject 1 (human psychophysics). Two excellent candidates were appointed in the fall of 2017: Jesse Heckman (PhD1, per Sept 2017) and Annemiek Barsingerhorn (Postdoc1, per Oct 2017). We were fortunate to appoint prof Ad Snik, recently retired from the Otolaryngology Dept. of the Radboud UMC, per Sept 1, 2017, as co-supervisor of the patient psychophysics. Prof Snik is a world-recognised expert on auditory technology and audiology, and his appointment therefore fits perfectly with the Action's aims concerning our work with sensory patients.

We have now initiated a search for a second postdoc to pursue the work on modelling the gaze control system with spiking neuronal networks (Subproject 2), and we will soon start looking for a third postdoc to strengthen the team of Subproject 1.

The collaboration with prof Bernardino is well on its way. Between April and Oct 2017, a master student (Miguel Ruiz Lucas) designed and tested a prototype robotic eye (resulting in his Master Thesis report and his graduation in Oct 2017). As of April 2018, two Lisbon master students are continuing the work to improve and extend this prototype (Carlos Aleluia, who works on the 3D kinematics and on mechanical improvements, and Mariana Martins, who focuses on visual-image processing and stabilisation of the system's positioning). We have recently recruited a PhD student (Akhil John, appointed per Sept. 2018) and are currently hiring a postdoc (Omar Salah, to be appointed in the fall of 2019). Because of the successful collaboration, the PI has prepared an amendment to the Grant Agreement to change the status of the Lisbon group to that of co-Beneficiary, so that the collaboration will also include formal scientific work from the Lisbon group.

Months 1-30 (midterm)


The project started in January 2017 with setting up the collaboration with the Visual Lab of prof Alexandre Bernardino of the Robotics Institute at the Instituto Superior Técnico in Lisbon, and with starting the research by hiring the first applicants and interns. The Memorandum of Understanding was formally signed by the Institute Deans of both Universities (Nijmegen and Lisbon) in Sept. 2017. So far, 22 research papers have resulted from the work in this Action: 15 papers from Subproject 1, 6 papers from Subproject 2, and 1 paper from Subproject 3. Overall, the project can be considered a great success.

Subproject 1: Human multisensory gaze control in complex environments: psychophysics.

The multisensory two-axis vestibular chair at the Faculty of Science (Radboud Research Facilities), which is the central experimental facility for this Action, became fully available for the psychophysical experiments of Subproject 1 in June 2018 (for a video, see: Chair). In Sept. 2017, J Heckman (PhD1), and per Oct. 2017, A Barsingerhorn (Postdoc1), were appointed. Experiments on the neural mechanisms underlying sound localisation in noisy environments and on Bayesian mechanisms underlying audiovisual integration were published in 2017-2019: Van Opstal et al., 2017; Bremen et al., 2017; Van Bentum et al., 2017; Ege et al., 2018a,b, 2019; Zonooz et al., 2018a,b. The PI has given several presentations on this work at international conferences and invited seminars, e.g. at the NCM meetings in Santa Fe, NM, USA, and in Toyama, Japan; in Rovereto, Italy; and in Kosice, Slovakia. PhD1 presented his work at the Gordon Conference on eye movements in Lewiston, ME, USA. Postdoc1 will present her current results at the European Conference on Eye Movements in Alicante.
We hired prof A Snik per Sept 1, 2017, to work on sensory-deprived patients in collaboration with our applicants. He is a world-recognized expert on auditory technology and audiology, and his appointment fits perfectly with the Action's aims. In Jan 2019, we attracted F Rocchi (Italy) as Postdoc2 to ORIENT, to work on audio-visual psychophysics and plasticity/adaptation. To set up the auditory patient work under the supervision of prof Snik, we hired two young PhD researchers for a period of 6 months (Jan-June 2019): S Sharma and S Ausili. They performed sound-localization studies in our lab with hearing-impaired patients equipped with a cochlear implant (unilateral or bilateral) and a hearing aid (so-called bimodal electro-acoustic hearing). So far, five publications have appeared from this work (Snik et al., 2019; Huinck et al., 2019; Vogt et al., 2018; Sharma et al., 2019; Ausili et al., 2019), and about six more papers are expected to follow this year.

Subproject 2: Computational modelling of the eye-head gaze-control system.
During the first months of the Action (Feb-May 2017), the PI appointed B Kasap (PhD student) to work on a computational model of the midbrain by implementing a novel spiking neural network algorithm. Six manuscripts emerged from this work (in J Neurophysiology (2018) and Neurocomputing (2018); in Front Appl Math and Stat (2018) and PLoS Comput Biol (2019); and two papers in press in Progr Brain Res). In April 2019, the PI appointed Postdoc3 on this Subproject, dr A Alizadeh (Iran), who will extend the current spiking network model to 3D eye-head coordination.

Subproject 3: Humanoid robotic model of the eye-head gaze-control system.

The collaboration with prof Bernardino on Subproject 3 goes very well. In 2018, the PI submitted an amendment to the Grant Agreement to make the Lisbon group a co-Beneficiary, so that the collaboration can now include formal scientific work from the Lisbon group, with its own allotted budget.
Between April and Oct 2017, a master student (Miguel Ruiz Lucas) designed and tested a prototype robotic eye (resulting in a Master Thesis report and his graduation in Oct 2017). From April 2018 to Oct 2019, the work continued with two new students: Carlos Aleluia, who works on the 3D kinematics and on mechanical improvements, and Mariana Martins, who focuses on the visual-image processing and positional stabilisation of the system. This work is expected to lead to a first joint publication this year. We aim to recruit more interns from the Instituto Superior Técnico to collaborate on our project; master students from Nijmegen may also be sent to Lisbon for brief internships.
During the first half of the Action, the PI paid 23 short (typically 4-5 day) visits to the Lisbon group to collaborate on Subproject 3. A first paper on 3D eye movements was published in December 2018 (Van Opstal, Strabismus, 2018). The coordinator and Bernardino recruited PhD2 in Sept. 2018 (A John, India) and actively searched for Postdoc4, for which we have now attracted an excellent candidate from Egypt, dr. Omar Salah Nour, who will start in Oct or Nov 2019. In Lisbon, we currently work on an unconventional prototype of a rotating humanoid eye, in which we aim to model and control the six elastic eye muscles. Although other labs have produced more or less realistic robotic eye-movement models, none have so far included the actual complexity of the true three-degrees-of-freedom problem (leading to Donders' law and Listing's law) and the mechanical properties of the muscles. We are currently adapting our system to also include dynamic viscosity, to better approximate the overdamped mechanics of the biological human eye. For a demo, see the Eye.

Our current progress was recently presented at a MiniSymposium in the Lisbon Visual Lab on June 21, 2019 (see the presentations).

See also IST News (Sept 2019) for a nice news item about our project on the IST website.





Minisymposium at IST Visual Lab, Lisboa, June 21, 2019



On Friday, June 21, 2019, we organised a minisymposium at the IST Visual Lab in Lisboa, hosted by Alex, in which John, Akhil, Mariana and Carlos presented their ongoing work to the department on the 7th floor of Torre Norte. Below are the pdf files of their presentations:
  1. John's presentation on the background of our robotics project is found here
  2. Akhil's presentation on the mechanical implementation of the 3D robotic eye is found here
  3. Mariana's presentation on the 3D visual mapping algorithms for rotation and translation of the robot's camera eye is found here
  4. Carlos' presentation on a 3D model of the robot's eye and the use of optimal control to understand Listing's law is found here

    Listing's Law for the eye: An optimal control principle?
    Listing's Law, measured for head-restrained monkey saccades in 3D. In quaternion laboratory coordinates (rx, ry, rz), Listing's plane (rx = a·ry + b·rz) has a width of only 0.6 deg. The primary position (PP: (1, a, b)) points upwards (as the center of the oculomotor range, OMR, is about 15 deg downward); i.e., the plane is tilted re. gravity. Figure taken from Hess et al., Vision Res., 1992.
    There is controversy about the origin of this two degrees of freedom (dof) behaviour: mechanical constraints (e.g., muscle pulleys) vs. neural (control) strategies. The following arguments support a control strategy:
    1. Listing's law (rx = 0) only holds for the head upright and still, and the eyes looking at infinity.
    2. For vergence eye movements, ocular torsion depends on vertical eye orientation: eyes have three dof!
    3. During torsional head rotations, the eyes counter-roll: eyes have three dof!
    4. The eyes counter-roll during static head tilts: eyes have three dof!
    5. After small spontaneous violations of LL there is no passive drift back into the plane: The 3D eye's orientation is controlled!
    6. Small spontaneous violations of LL (up to 1-2 deg) are actively compensated by the next saccade: The saccadic system has three dof!
    7. After brainstem microstimulation (e.g. in riMLF, or NRTP; see van Opstal et al., J Neuroscience, 1996) the eyes will roll out of LP (by up to 10 deg!). However, the next saccade brings the eyes back to LP! Saccades really have three dof when needed!
    8. Indeed, during head-free gaze saccades, the eyes do not obey LL! However, they do after the head movement is finished (e.g. Tweed, Science, 1998): The saccadic system has three dof!
    No passive mechanical model can ever account for these findings! (But an active, i.e. neural, control model could...)
    Computer simulations of random 3D saccades with a nonlinear mechanical model of our 3D robotic eye, with three controllers and six elastic 'muscles' at realistic insertion points on the globe (by Carlos). This model can generate 3D eye orientations over a large range (e.g., torsion up to 15 deg). Optimal control on a linearised version of this model is assumed to minimise the weighted sum of different costs. Here, we show two different strategies to account for Listing's law:
    Top figure includes three costs: saccade duration (p; time discount), total energy consumption during the trajectory (proportional to squared control velocity), and saccade accuracy; the latter requires the eye to be on the target T=(ry, rz) and in LP (i.e., rx = 0) at the end of the movement (at time p). PP is assumed to be straight ahead.
    Bottom figure includes four costs: duration (p), energy, saccade accuracy (but now only looking at the 2D target location; torsion left free), and total muscle force at each fixation (at p).
    Both models yield 3D single-axis rotations that generate saccade trajectories keeping the eye quite close to LP! The latter even yields a downward tilted plane (dotted yellow line; the central OMR is at about 15 deg upward from PP), due to the force of gravity acting on our (macro-eye) system.
    Interestingly, the saccades followed nearly straight trajectories (i.e., they synchronized the 3D motor commands (top)), and yielded velocity profiles that obey the well-known nonlinear saccade main sequence (increase of saccade duration with amplitude, saturation of peak eye velocity with amplitude, and roughly a fixed-duration acceleration phase), although the model is (at present) free of (multiplicative) noise.
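The zero-torsion constraint that both simulations converge on can also be written down directly: per Listing's law, the rotation vector that takes the eye from the primary position to any gaze direction lies in the plane perpendicular to the primary direction (rx = 0). A minimal numerical check (the coordinate convention, with the primary direction along x, is chosen purely for illustration):

```python
import numpy as np

def listing_rotation_vector(gaze_dir, primary=np.array([1.0, 0.0, 0.0])):
    """Rotation vector r = tan(theta/2) * (unit axis) that takes the eye
    from the primary direction to `gaze_dir` per Listing's law: the
    rotation axis lies in the plane perpendicular to the primary
    direction, so the torsional component (along `primary`) is zero."""
    d = gaze_dir / np.linalg.norm(gaze_dir)
    axis = np.cross(primary, d)          # perpendicular to primary: rx = 0
    s = np.linalg.norm(axis)             # sin(theta)
    c = float(primary @ d)               # cos(theta)
    if s < 1e-12:
        return np.zeros(3)               # gaze at primary position
    theta = np.arctan2(s, c)
    return np.tan(theta / 2.0) * axis / s

# 30-deg eccentric gaze: the resulting rotation vector has zero torsion.
r = listing_rotation_vector(np.array([np.cos(np.radians(30)),
                                      np.sin(np.radians(30)), 0.0]))
```

Points 2-8 of the list above are precisely the conditions under which this constraint is relaxed and the torsional component becomes non-zero, which is why a neural (rather than purely mechanical) implementation is required.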





    Demonstrations.

    1. Here, you can view a demonstration of our two-axis VESTIBULAR CHAIR
    2. Torsional (clockwise/counter-clockwise) nystagmus of the eye, measured with our PUPIL LABS eye tracker (you can clearly see the rotation of the iris!). It unequivocally shows that the eye's orientation has three degrees of freedom.
    3. See here a brief demonstration of saccades made by our proto-type ROBOT EYE
    4. Here's an MRI MOVIE of the human eye (courtesy: Hotte et al., Transl. Vis Sci Technol 5:9, 2016) making large horizontal left-right saccades. Note the large sideways movements of the optic nerve through the viscous fat surrounding the globe! This causes the transfer function of the ocular plant to be overdamped. Also note the lateral and medial rectus eye muscles.


    Click here to see Jesse
    Current 2D audio-visual array in the chair (63 locations in azimuth and
    elevation). Each speaker also has a central LED for visual stimulation.
    Jesse makes eye-head movements to targets (here, shown in the light),
    while under passive two-axis sinusoidal vestibular rotation.





    List of publications from ORIENT

    1. A.J. Van Opstal, J. Vliegen, and T. Van Esch
      Reconstructing spectral cues for sound localization from responses to rippled noise stimuli.
      PloS ONE 12(3): e0174185, 2017 (doi)
    2. G.C. Van Bentum, A.J. Van Opstal, C. C.M.van Aartrijk and M.M Van Wanrooij
      Level-weighted response averaging in elevation to synchronous amplitude-modulated sounds.
      Journal of the Acoustical Society of America, 142: 3094-3103, 2017 (doi)
    3. B. Kasap and A.J. Van Opstal
      A spiking neural network model of the midbrain Superior Colliculus that generates saccadic motor commands.
      Biological Cybernetics, 111: 249-268, 2017 (doi)
    4. P. Bremen, R. Massoudi, M.M. Van Wanrooij, and A.J. van Opstal
      Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man.
      Frontiers Behavioural Neuroscience 11: 89, 2017 (doi)
    5. R. Ege, A.J. van Opstal, P. Bremen and M.M. Van Wanrooij
      Testing the precedence effect in the median plane reveals backward spatial masking of sound.
      Scientific Reports (Nature) 8(1):8670, 2018 (doi)
    6. B. Kasap and A.J. Van Opstal
      A model for auditory-visual evoked eye-head gaze shifts in dynamic multi-steps.
      Journal of Neurophysiology 119: 1796-1808, 2018a (doi)
    7. B. Kasap and A.J. Van Opstal
      Dynamic parallelism for spike propagation in GPU accelerated spiking neural network simulations.
      Neurocomputing 302: 55-65, 2018b (doi)
    8. B. Kasap and A.J. Van Opstal
      Double stimulation in a spiking neural network model of the midbrain Superior Colliculus.
      Frontiers in Applied Mathematics and Statistics, 4: 47, 2018c (doi)
    9. R. Ege, A.J. van Opstal, and M.M. Van Wanrooij
      Accuracy-precision trade-off in human sound localisation.
      Scientific Reports (Nature), 8: 16399, 2018 (doi)
      (Supplementary material)
    10. B. Zonooz, E. Arani, and A.J. Van Opstal
      Learning to localise weakly-informative sound spectra with and without feedback.
      Scientific Reports (Nature), 8: 17933, 2018 (doi)
    11. L.P.H. Van de Rijt, M.M. Van Wanrooij, A.F.M. Snik, E.A.M. Mylanus, A.J. Van Opstal and A. Roye
      Measuring cortical activity during auditory processing with functional near-infrared spectroscopy.
      Journal of Hearing Science 8(4) pp. 9-18, 2018 (doi)
    12. K. Vogt, H. Frenzel, S.A. Ausili, D. Hollfelder, B. Wollenberg, A.F.M. Snik, and M.H.J. Agterberg
      Improved directional hearing of children with congenital unilateral conductive hearing loss implanted with an active bone-conduction implant or an active middle ear implant.
      Hearing Research 370: 238-247, 2018 (doi)
    13. A.J. Van Opstal
      Editorial: 200 years Franciscus Cornelis Donders.
      Strabismus 26 (4) 159-162, 2018. (doi)
    14. B. Zonooz, E. Arani, P.A.T.R. Aalbers, K.P. Koerding and A.J. Van Opstal
      Spectral weighting underlies perceived sound elevation.
      Scientific Reports (Nature), 9: 1642, 2019 (doi)
    15. B. Kasap and A.J. Van Opstal
      Microstimulation in a spiking neural network model of the midbrain superior colliculus.
      PLoS Computational Biology, 15(4): e1006522, 2019 (doi)
    16. R. Ege, A.J. van Opstal, and M.M. Van Wanrooij
      Experience shapes human sound-localisation behaviour.
      ENeuro (J Neuroscience) 6(2) 1-15, 2019 (doi)
    17. S.A. Ausili, B. Backus, M.J.H. Agterberg, A.J. Van Opstal, and M.M. Van Wanrooij
      Sound Localization in real-time vocoded cochlear-implant simulations with normal-hearing listeners.
      Trends in Hearing, 23:1-18, 2019 (doi)
    18. S. Sharma, L.H.M. Mens, A.F.M. Snik, A.J. Van Opstal, and M.M. Van Wanrooij
      An individual with hearing preservation and bimodal hearing using a cochlear implant and hearing aids has perturbed sound localization but preserved speech perception.
      Frontiers in Neurology, 10:637, 2019 (doi)
    19. W.J. Huinck, E.A.M. Mylanus, and A.F.M. Snik
      Expanding unilateral cochlear implantation criteria for adults with bilateral acquired severe sensorineural hearing loss.
      European Archives of OtoRhinoLaryngology, 265(5): 1313-1320, 2019 (doi)
    20. A. Snik, H. Maier, B. Hodgetts, M. Kompis, G. Mertens, P. van de Heijning, T. Lenarz, and A. Bosman
      Efficacy of Auditory Implants for Patients With Conductive and Mixed Hearing Loss Depends on Implant Center.
      Otology and Neurotology 40: 430-435, 2019 (doi)
    21. A.J. Van Opstal and B. Kasap
      Maps and sensorimotor transformations for eye-head gaze shifts: Role of the midbrain Superior Colliculus.
      Progress in Brain Research, Vol. 249, Ch. 2, pp. 19-33, 2019 (doi)
    22. A.J. Van Opstal and B. Kasap
      Electrical stimulation in a spiking neural network model of the monkey Superior Colliculus.
      Progress in Brain Research, Vol. 249, Ch. 11, pp. 153-166, 2019 (doi)
    23. L.P.H. Van de Rijt, A. Roye, E.A.M. Mylanus, A.J. Van Opstal, and M.M. Van Wanrooij
      The principle of inverse effectiveness in audiovisual speech perception.
      Frontiers in Human Neuroscience 13:335, 2019 (doi)
    24. B. Zonooz, and A.J. van Opstal
      Differential adaptation in azimuth and elevation to acute monaural spatial hearing after training with visual feedback.
      ENeuro (J Neurosci), 6(5) 1-18, 2019 (doi)
    25. K. Vogt, J.-W. Wasmann, A.J. van Opstal, A.F.M. Snik, and M.J.H. Agterberg
      Contribution of spectral pinna cues for sound localization in children with congenital unilateral hearing loss after hearing rehabilitation.
      Hearing Research, in press, 2019 (doi)
      Upcoming:


    26. R. Ege, A.J. van Opstal, and M.M. Van Wanrooij
      Frequency transfer of the ventriloquism after effect.
      ENeuro (J Neurosci), to be submitted, 2019

