Regular Articles

Vol. 12, No. 7, pp. 30–37, July 2014. https://doi.org/10.53829/ntr201407ra1

Understanding the Coordination Mechanisms of Gaze and Arm Movements

Naotoshi Abekawa and Hiroaki Gomi

Abstract

We report on the coordination mechanisms of gaze and arm movements in the visuomotor control process. The brain's information-processing mechanisms underlying eye movements, arm movements, and their coordination need to be understood in order to design sophisticated, user-friendly interactive human-machine interfaces. In this article, we first review experimental research on visuomotor control, primarily from the viewpoints of eye-hand coordination and online feedback control. We then introduce our recent experimental studies investigating the eye-hand coordination mechanism during online feedback control. The experimental results provide evidence that reaching corrections rapidly and automatically induced by visual perturbations are influenced by changes in gaze direction. These results suggest that the online reach controller interacts closely with gaze systems.

Keywords: visuomotor control, eye-hand coordination, online feedback control

1. Introduction

We can make arm movements to grasp a cup, manipulate a computer with a mouse, or hit a fastball. These motor functions are achieved by our brain's information-processing system. A deep understanding of the brain mechanisms underlying human motor behavior is fundamental to designing user-friendly interactive human-machine interfaces with future information technologies.

Humans perform various visually guided actions in daily life. To perform movements that involve reaching the arm toward a visual target, we have to look at the target and then detect its location relative to the hand. In this situation, gaze behavior should be coordinated with hand movements in a spatiotemporally appropriate manner. Furthermore, visual information that falls on the retina is integrated with other sensory information related to gaze direction, head orientation, or hand location, so as to be transformed into desired muscle contractions. These cooperative aspects of gaze and hand systems are referred to as eye-hand coordination (discussed in detail in section 2.1).

In addition to eye-hand coordination, one important aspect of our daily actions is dynamic interaction with the external world. Consider playing tennis as an example. Tennis players have to run and hit the ball even when its flight trajectory changes irregularly. In such dynamic situations, players must correct their reaching trajectory as quickly as possible in response to unpredictable perturbations such as a sudden shift of the target or of their own body. Such midflight correction of a reaching movement is mediated by an online feedback controller (discussed in detail in section 2.2).

Given that the hand and eyes move cooperatively throughout our daily lives [1], the two motor systems would also seem to be tightly coupled during online feedback control. However, this issue has not been sufficiently addressed in the related research fields despite its importance. In this article, we review recent studies on eye-hand coordination and online feedback control in section 2. In section 3, we introduce our experimental studies on eye-hand coordination during online feedback control. We conclude the article in section 4.

2. Background

2.1 Eye-hand coordination

We discuss eye-hand coordination in terms of two aspects. The first is the spatiotemporal coordination of gaze and reaching behaviors, typically observed in experimental tasks where participants look at and reach toward a designated target. Arm movements in such tasks are usually preceded by a rapid eye movement to the target, called a saccade. A saccade made before the hand starts to move can provide the hand motor system with crucial information, for example, visual information, a target representation, or motor commands [2]. Indeed, several studies have found that eye movements affect properties of concurrent hand movements such as reaction time, initial acceleration, and final position [3]–[6]. In addition, saccade and hand reaction times are temporally correlated with each other, suggesting that both motor systems share common neural processing [7], [8]. Arm-reaching movements can in turn affect the gaze system: the reaction time of a saccade differs depending on whether it is made with or without an accompanying arm-reaching movement [9], [10], and arm-reaching movements also affect saccade landing points, with saccades tracking a participant's unseen arm-reaching movements [11]. These findings suggest that the hand and eye motor systems interact with each other in visually guided reaching actions.

The second aspect of eye-hand coordination is the coordinate transformation from visual to body space. When making an arm-reaching movement, the brain must code the locations of both the target and the hand in a common frame of reference so as to compute the difference vector from the hand to the target. The classical view is that body-centered coordinates serve as this common reference frame [12], [13]: visual information about the target location on the retina is transformed into body-centered coordinates by integrating it with the eye orientation in the orbit, the head rotation relative to the shoulder, and the arm posture. However, more recent behavioral, imaging, and neurophysiological studies have revealed that the locations of the target and hand are represented in a gaze-centered frame of reference and that the posterior parietal cortex (PPC) is involved in constructing this representation [14]–[17]. The difference vector between hand and target is also computed in this gaze-centered representation [18]. The gaze-centered representations of the target and hand must be updated quickly with each eye movement; this updating process is called spatial remapping. Spatial remapping has been found to occur predictively, before the actual eye movement, on the basis of oculomotor preparatory signals [19].
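To make the gaze-centered coding concrete, the sketch below expresses target and hand locations in a gaze-centered frame and computes the hand-to-target difference vector, in the spirit of [18]. Everything in it (the function names, frame conventions, and the simplification of a single rotation about the vertical axis) is our illustrative assumption, not a detail taken from the cited studies.

```python
import numpy as np

def to_gaze_centered(point_body, eye_origin, body_to_gaze):
    """Re-express a body-centered 3D point in a gaze-centered frame.
    point_body:   (3,) point in body-centered coordinates (m)
    eye_origin:   (3,) eye position in the same body frame (m)
    body_to_gaze: (3, 3) rotation matrix from body axes to gaze axes
    """
    return body_to_gaze @ (point_body - eye_origin)

def reach_vector(target_body, hand_body, eye_origin, body_to_gaze):
    """Hand-to-target difference vector computed in gaze coordinates."""
    target_g = to_gaze_centered(target_body, eye_origin, body_to_gaze)
    hand_g = to_gaze_centered(hand_body, eye_origin, body_to_gaze)
    return target_g - hand_g

# Illustrative values: gaze axes rotated 20 deg about the vertical axis.
theta = np.deg2rad(20.0)
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
v = reach_vector(target_body=np.array([0.0, 0.40, 0.0]),   # 40 cm ahead
                 hand_body=np.array([0.0, 0.10, -0.20]),   # near the torso
                 eye_origin=np.array([0.0, 0.0, 0.30]),
                 body_to_gaze=R)
# Spatial remapping amounts to recomputing these coordinates with the
# post-saccadic rotation; predictive remapping [19] does so before the
# eye actually moves, using oculomotor preparatory signals.
```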

2.2 Online feedback control

Online feedback control in visuomotor processing has been investigated using a visual perturbation paradigm since the 1980s [20]–[22]. In this paradigm, the location of the reaching target is changed unexpectedly during a reaching movement. The results showed that participants can adjust their reaching trajectories rapidly and smoothly in response to such target shifts. Interestingly, the latency of this online correction is 120–150 ms after the target shift, which is much shorter than that of voluntary motor reactions to a static visual target (more than 150–200 ms) [23]. Furthermore, the correction can be initiated even when the participant is not aware of the target shift [24]. These results suggest that the online reaching correction to a target shift is mediated by reflexive mechanisms that differ from the mechanism underlying voluntary motor reactions to a static visual target [25]. Patient [26], [27], transcranial magnetic stimulation [28], and imaging [29], [30] studies have found that the PPC plays a significant role in producing this rapid, reflexive online reaching correction.

As mentioned in the Introduction, we have to respond not only to external changes (i.e., target shifts) but also to our own body movements. When such body movements occur, the eyes frequently receive background visual motion in the direction opposite to the body movement. This can be easily understood by considering a hand-held video camera: since our hands usually shake during recording, the image on the camera's screen moves in the direction opposite to our hand movements. This suggests that visual motion can be utilized to control reaching movements against body movement. Indeed, this idea is supported by several studies [31]–[34], in which large-field visual motion was presented on a screen during a reaching movement. The results showed that the reaching trajectory shifted rapidly (100–150 ms) and unintentionally in the direction of the visual motion, a response known as the manual following response (MFR). Several studies have demonstrated the functional significance of the MFR by observing reaching accuracy when visual motion and actual perturbations of the participants' posture were introduced simultaneously [32], [35]. The computational and physiological mechanisms underlying the MFR have not yet been fully elucidated; however, cortical areas related to visual motion processing, such as the middle temporal and medial superior temporal areas, are thought to contribute to generating the MFR [36].

3. Eye-hand coordination in online feedback control

Although eye-hand coordination has been examined widely, as described in the previous section, the experimental tasks used in those studies were restricted to reaching movements from a static posture toward a stationary target. The previous studies therefore focused mainly on the coordination mechanism during the motor planning process. During the execution of a reach, however, it is necessary to correct the trajectory rapidly in response to unexpected perturbations, and such visually guided online corrections are thought to be mediated by the reflexive mechanism described in section 2.2. In this section, we describe our recent experimental studies, which focused on the eye-hand coordination mechanism during online visuomotor control.

3.1 Eye movements and reaching corrections with a target shift

We observed eye movements and online reaching corrections when a target was shifted during a reaching movement [37]. The experimental apparatus is shown in Fig. 1(a). Participants (n = 17) made forward reaching movements on a digitizer while holding a stylus pen. The pen location was presented on the monitor as a black cursor. At the start of each trial, participants placed the cursor in the start box (a square at the bottom of the monitor) while maintaining eye fixation on the central fixation cross (Fig. 1(b)). A reaching target was then presented over the fixation cross, cueing participants to initiate the reach (distance: 22 cm, duration: 0.6 s). In randomly selected trials, the target shifted 7.6 cm rightward (32 of 96 trials) or leftward (32 trials) 100 ms after reaching initiation. In these target-jump trials, participants were required to make smooth online corrections to the new target location as quickly as possible. In the remaining 32 trials, the target remained stationary, and participants continued to reach toward the original target location.
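As an aside on the design, below is a minimal sketch of how such a balanced, randomly interleaved trial list could be generated (Python; the function name and seeding scheme are our own illustration, not taken from [37]).

```python
import random

def make_trial_sequence(n_per_condition=32, seed=None):
    """Balanced trial list for the target-jump paradigm described above:
    equal numbers of rightward-shift, leftward-shift, and stationary
    trials (32 each, 96 total), randomly interleaved."""
    trials = (["right"] * n_per_condition
              + ["left"] * n_per_condition
              + ["stationary"] * n_per_condition)
    random.Random(seed).shuffle(trials)
    return trials

sequence = make_trial_sequence(seed=1)  # e.g., ['left', 'stationary', ...]
```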


Fig. 1. Experimental paradigm in first study.

To examine the effect of gaze behavior on the reaching correction, we conducted this reaching task under two gaze conditions: saccade (SAC) and fixation (FIX). In the SAC condition, participants made the reaching correction together with a saccadic eye movement to the new target location, whereas in the FIX condition, they made the reaching correction while maintaining eye fixation on the central fixation cross (i.e., the original target location). Each gaze condition was run in a separate block of 48 trials.

Reaching trajectories of a typical participant in the SAC condition are shown in Fig. 2(a). The trajectory deviated smoothly during the reach according to the direction of the target shift. To evaluate the initiation of the reaching correction in more detail, we calculated hand accelerations along the x-axis (the axis of the target shift), temporally aligned to the onset of the target shift (Fig. 2(b)). The hand response deviated rapidly in the direction corresponding to each target shift about 150 ms after the shift. The response latencies of the reaching correction and the saccade are indicated by filled (143 ms) and open (187 ms) triangles, respectively. This temporal relationship (the reaching correction preceding saccade initiation) was obtained for all participants. Hand and eye latencies for all trials across all participants are plotted in Fig. 2(c). Most of the data (81.6%) fell above the diagonal line, indicating that the reaching correction was usually initiated before the onset of the eye movement. This temporal difference was statistically significant in a paired t-test (p < 0.001), as shown in Fig. 2(d).
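For illustration, here is a minimal sketch of this kind of latency estimate (Python/NumPy). The threshold criterion shown is a common convention and our own assumption; the exact criterion used in [37] may differ.

```python
import numpy as np

def correction_latency_ms(pos_x, fs, shift_idx, sd_mult=3.0, base_ms=100):
    """Estimate reaching-correction latency from a hand x-position trace.

    pos_x:     1-D array of hand x-position samples (m)
    fs:        sampling rate (Hz)
    shift_idx: sample index at which the target shifted

    Acceleration is obtained by double differentiation (real data would be
    low-pass filtered first); latency is the first post-shift sample at
    which |acceleration| deviates from the pre-shift mean by more than
    sd_mult baseline standard deviations.
    """
    acc = np.gradient(np.gradient(pos_x) * fs) * fs          # m/s^2
    n_base = int(base_ms * fs / 1000)
    base = acc[shift_idx - n_base:shift_idx]
    crossed = np.nonzero(np.abs(acc[shift_idx:] - base.mean())
                         > sd_mult * base.std())[0]
    return crossed[0] * 1000.0 / fs if crossed.size else None
```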

The hand-first, eye-second pattern observed in this study indicates that the online reaching correction can be initiated from peripheral visual information, before the eye movement. This temporal order is completely different from that reported in conventional eye-hand tasks, where the eye-first, hand-second pattern indicates that reaches initiated from a static posture rely on central visual information. A recent imaging study showed that, compared with central reaching, peripheral reaching involves an extensive cortical network including the PPC [38]. These findings thus further support the idea that distinct brain mechanisms underlie motor planning for voluntary reaching initiation and online feedback control during motor execution.

We next investigated the dependence of the reaching correction on saccadic eye movements. First, we observed that the initiation of the reaching correction was temporally correlated with saccade onset (correlation coefficient = 0.39, p < 0.001, Fig. 2(c)); the correlation was significant (p < 0.05) in 13 of the 17 participants and marginally significant (p < 0.1) in one more. Second, we found that the latency of the reaching correction changed with the gaze condition: the correction was significantly (p < 0.05) faster in the SAC condition than in the FIX condition, as shown in Fig. 2(e).
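These are standard analyses; a sketch of how they could be run is below (SciPy). The latency values are placeholders for illustration only, not the published data.

```python
import numpy as np
from scipy import stats

# Per-trial hand and eye latencies (ms), as plotted in Fig. 2(c).
# Placeholder values for illustration, not the published data.
hand_ms = np.array([143.0, 150.0, 139.0, 160.0, 148.0, 155.0])
eye_ms  = np.array([187.0, 195.0, 170.0, 210.0, 190.0, 199.0])

r, p_corr = stats.pearsonr(hand_ms, eye_ms)    # hand-eye latency correlation
t, p_diff = stats.ttest_rel(eye_ms, hand_ms)   # paired test: is the eye later?
```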


Fig. 2. Results of first study.

The correlation finding indicates that the hand and eye control systems do not act independently; rather, they share common processing at some stage. Furthermore, the dependence of the reaching correction on the gaze condition suggests that the hand and eye motor systems interact closely even during online feedback control. Since the saccade had not yet been initiated when the reaching correction started, the change in hand latency cannot be ascribed to changes in visual or oculomotor signals obtained after the actual eye movement. Our findings therefore imply that the online reaching controller interacts with oculomotor preparation signals before the actual eye movement.

3.2 Gaze direction and reaching corrections to background visual motion

Our second study focused on reaching corrections induced by visual motion (the MFR). In this case, the visual motion applied during reaching did not induce explicit eye movements. Thus, to address the mechanism of eye-hand coordination, we examined the effect of gaze direction on the MFR [39]. Gaze direction relative to the reaching target is known to be a key feature in constructing the gaze-centered spatial representation described in section 2.1.

The experimental apparatus is shown in Fig. 3(a). Participants (n = 6) were seated in front of a back-projection screen and asked to make forward reaching movements (distance: about 39 cm) toward a 1-cm² piece of rubber. The target and the participant's hand were completely occluded from view. The time course of a single trial is shown in Fig. 3(b). At the start of each trial, participants touched the target to confirm its location. They then pressed a button, which triggered the presentation of a stationary random checkerboard pattern and a fixation marker. After the fixation marker appeared, beeps cued participants to initiate a reaching movement while maintaining eye fixation. In randomly selected trials, the background visual stimulus moved upward (16 of 48 trials) or downward (16 trials) for 500 ms shortly after reaching initiation. In the remaining 16 trials, the visual stimulus remained stationary. Participants were asked to reach toward the target location regardless of whether visual motion was presented.

Participants performed this reaching task under four gaze-reach configurations (0, 20, 40, and 60°), as shown in Fig. 3(c). In the 0° condition (left panel in Fig. 3(d)), the head was oriented straight ahead and the screen was located in front of the participant, so the gaze direction matched the target location. In the 60° condition (right panel in Fig. 3(d)), the head was rotated 60° to the left, and the screen was repositioned so that the visual stimuli were presented to participants identically across the four gaze conditions; here the gaze direction was far from the reaching target. The 20° and 40° conditions used rotation angles of 20° and 40°, respectively. In all conditions, participants made the same reaching movement toward the identical target location. This paradigm therefore allowed us to examine the effect of gaze-reach configuration on the online manual response to visual motion.


Fig. 3. Experimental paradigm in second study.

Typical reaching trajectories in the 0° condition (two participants: P1 and P2) are shown in Fig. 4(a). When the background visual stimulus moved during a reaching movement, the reaching trajectory deviated in the direction of the visual motion (blue lines for upward, red lines for downward). This reflexive MFR was observed in all participants. To analyze the response in more detail, we calculated hand accelerations along the z-axis (the direction of visual motion), temporally aligned to the onset of visual motion (Fig. 4(b); same participants as in Fig. 4(a)), and took the difference in hand acceleration between the upward and downward visual motions. This difference trace for the 0° and 60° conditions is shown in Fig. 5(a) (data for P1). Interestingly, the manual response was larger in the 0° condition than in the 60° condition even though identical visual motion was applied in both. We quantified the response amplitude as the mean response between 100 and 200 ms after the onset of visual motion (black solid line in Fig. 5(a)). A comparison of the response amplitudes averaged across participants for all gaze conditions (0, 20, 40, and 60°) is given in Fig. 5(b). The MFR was largest in the 0° condition, and the response amplitude decreased significantly as the gaze direction deviated from the reaching target (analysis of variance (ANOVA), p < 0.05).
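For reference, here is a minimal sketch of this amplitude measure (Python/NumPy). The function name and the assumed data layout (trial-averaged acceleration traces sampled at fs) are our own.

```python
import numpy as np

def mfr_amplitude(acc_up, acc_down, fs, onset_idx, win_ms=(100, 200)):
    """MFR amplitude as defined in the text: the mean of the up-minus-down
    z-acceleration difference in a 100-200 ms window after visual-motion
    onset. acc_up and acc_down are trial-averaged traces (m/s^2)."""
    diff = np.asarray(acc_up) - np.asarray(acc_down)
    i0 = onset_idx + int(win_ms[0] * fs / 1000)
    i1 = onset_idx + int(win_ms[1] * fs / 1000)
    return diff[i0:i1].mean()
```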


Fig. 4. Results of second study.


Fig. 5. MFR modulation.

These results indicate that the automatic manual response induced by visual motion is flexibly modulated by the spatial relationship between the gaze and the reaching target. This spatial relationship is computed from the gaze-centered target representation, which is constructed in the PPC during reach planning. The gain modulation of the MFR by gaze-reach configuration matches our natural behavior: we usually gaze at the reach target when highly accurate reaching is required. We therefore infer that the visuomotor gain for online reaching correction is functionally modulated by gaze-reach coordination.

4. Conclusion

We investigated mechanisms of eye-hand coordination during online visual feedback control. When a target shift or background visual motion is applied during reaching, rapid and reflexive online corrections can occur. Our studies provide experimental evidence that the online controller for arm reaching interacts closely with gaze systems.

The first study showed that the reaching correction to a target shift was temporally correlated with saccade onset and changed with a gaze behavior that started only after the correction had been initiated. This suggests that the online reaching controller interacts with gaze signals related to the planning of eye movements. The second study revealed that the amplitude of the reaching correction to visual motion changed with the spatial relationship between the gaze and the reach target, suggesting that the visuomotor gain of the reflexive online controller is functionally modulated by eye-hand coordination.

Eye-hand coordination is a basic aspect of the visually guided motor actions that occur frequently in our daily lives, and quick online motor corrections are among the foundations of skillful motor action in dynamic environments. We therefore believe that understanding the brain mechanisms underlying these visuomotor functions will provide important guidelines for developing user-friendly human-machine interfaces in the near future.

References

[1] M. F. Land and B. W. Tatler, Looking and Acting: Vision and Eye Movements in Natural Behaviour. Oxford; New York: Oxford University Press, 2009.
[2] H. Bekkering and U. Sailer, “Commentary: Coordination of Eye and Hand in Time and Space,” Prog. Brain Res., Vol. 140, pp. 365–373, 2002.
[3] P. van Donkelaar, “Eye-hand Interactions during Goal-directed Pointing Movements,” Neuroreport, Vol. 8, No. 9–10, pp. 2139–2142, Jul. 1997.
[4] P. van Donkelaar, “Saccade Amplitude Influences Pointing Movement Kinematics,” Neuroreport, Vol. 9, No. 9, pp. 2015–2018, Jun. 1998.
[5] G. Binsted and D. Elliott, “Ocular Perturbations and Retinal/Extraretinal Information: the Coordination of Saccadic and Manual Movements,” Exp. Brain Res., Vol. 127, No. 2, pp. 193–206, Jul. 1999.
[6] J. F. Soechting, K. C. Engel, and M. Flanders, “The Duncker Illusion and Eye-hand Coordination,” J. Neurophysiol., Vol. 85, No. 2, pp. 843–854, Feb. 2001.
[7] U. Sailer, T. Eggert, J. Ditterich, and A. Straube, “Spatial and Temporal Aspects of Eye-hand Coordination Across Different Tasks,” Exp. Brain Res., Vol. 134, No. 2, pp. 163–173, Sep. 2000.
[8] H. L. Dean, D. Martí, E. Tsui, J. Rinzel, and B. Pesaran, “Reaction Time Correlations during Eye-hand Coordination: Behavior and Modeling,” J. Neurosci., Vol. 31, No. 7, pp. 2399–2412, Feb. 2011.
[9] L. Lünenburger, D. F. Kutz, and K. P. Hoffmann, “Influence of Arm Movements on Saccades in Humans,” Eur. J. Neurosci., Vol. 12, No. 11, pp. 4107–4116, Nov. 2000.
[10] S. F. Neggers and H. Bekkering, “Ocular Gaze is Anchored to the Target of an Ongoing Pointing Movement,” J. Neurophysiol., Vol. 83, No. 2, pp. 639–651, Feb. 2000.
[11] G. Ariff, O. Donchin, T. Nanayakkara, and R. Shadmehr, “A Real-time State Predictor in Motor Control: Study of Saccadic Eye Movements during Unseen Reaching Movements,” J. Neurosci., Vol. 22, No. 17, pp. 7721–7729, Sep. 2002.
[12] M. Flanders, S. I. H. Tillery, and J. F. Soechting, “Early Stages in a Sensorimotor Transformation,” Behav. Brain Sci., Vol. 15, No. 2, pp. 309–320, 1992.
[13] J. McIntyre, F. Stratta, and F. Lacquaniti, “Viewer-centered Frame of Reference for Pointing to Memorized Targets in Three-dimensional Space,” J. Neurophysiol., Vol. 78, No. 3, pp. 1601–1618, Sep. 1997.
[14] D. Y. Henriques, E. M. Klier, M. A. Smith, D. Lowy, and J. D. Crawford, “Gaze-centered Remapping of Remembered Visual Space in an Open-loop Pointing Task,” J. Neurosci., Vol. 18, No. 4, pp. 1583–1594, Feb. 1998.
[15] W. P. Medendorp and J. D. Crawford, “Visuospatial Updating of Reaching Targets in Near and Far Space,” Neuroreport, Vol. 13, No. 5, pp. 633–636, Apr. 2002.
[16] W. P. Medendorp, H. C. Goltz, T. Vilis, and J. D. Crawford, “Gaze-centered Updating of Visual Space in Human Parietal Cortex,” J. Neurosci., Vol. 23, No. 15, pp. 6209–6214, Jul. 2003.
[17] A. P. Batista, C. A. Buneo, L. H. Snyder, and R. A. Andersen, “Reach Plans in Eye-centered Coordinates,” Science, Vol. 285, No. 5425, pp. 257–260, Jul. 1999.
[18] C. A. Buneo, M. R. Jarvis, A. P. Batista, and R. A. Andersen, “Direct Visuomotor Transformations for Reaching,” Nature, Vol. 416, No. 6881, pp. 632–636, Apr. 2002.
[19] J. R. Duhamel, C. L. Colby, and M. E. Goldberg, “The Updating of the Representation of Visual Space in Parietal Cortex by Intended Eye Movements,” Science, Vol. 255, No. 5040, pp. 90–92, Jan. 1992.
[20] M. A. Goodale, D. Pelisson, and C. Prablanc, “Large Adjustments in Visually Guided Reaching Do Not Depend on Vision of the Hand or Perception of Target Displacement,” Nature, Vol. 320, No. 6064, pp. 748–750, Apr. 1986.
[21] D. Pélisson, C. Prablanc, M. A. Goodale, and M. Jeannerod, “Visual Control of Reaching Movements without Vision of the Limb. II. Evidence of Fast Unconscious Processes Correcting the Trajectory of the Hand to the Final Position of a Double-step Stimulus,” Exp. Brain Res., Vol. 62, No. 2, pp. 303–311, 1986.
[22] C. Prablanc and O. Martin, “Automatic Control during Hand Reaching at Undetected Two-dimensional Target Displacements,” J. Neurophysiol., Vol. 67, No. 2, pp. 455–469, Feb. 1992.
[23] B. L. Day and I. N. Lyon, “Voluntary Modification of Automatic Arm Movements Evoked by Motion of a Visual Target,” Exp. Brain Res., Vol. 130, No. 2, pp. 159–168, Jan. 2000.
[24] V. Gritsenko, S. Yakovenko, and J. F. Kalaska, “Integration of Predictive Feedforward and Sensory Feedback Signals for Online Control of Visually Guided Movement,” J. Neurophysiol., Vol. 102, No. 2, pp. 914–930, Aug. 2009.
[25] H. Gomi, “Implicit Online Corrections of Reaching Movements,” Curr. Opin. Neurobiol., Vol. 18, No. 6, pp. 558–564, Dec. 2008.
[26] L. Pisella, H. Gréa, C. Tilikete, A. Vighetto, M. Desmurget, G. Rode, D. Boisson, and Y. Rossetti, “An ‘Automatic Pilot’ for the Hand in Human Posterior Parietal Cortex: Toward Reinterpreting Optic Ataxia,” Nat. Neurosci., Vol. 3, No. 7, pp. 729–736, Jul. 2000.
[27] H. Gréa, L. Pisella, Y. Rossetti, M. Desmurget, C. Tilikete, S. Grafton, C. Prablanc, and A. Vighetto, “A Lesion of the Posterior Parietal Cortex Disrupts On-line Adjustments during Aiming Movements,” Neuropsychologia, Vol. 40, No. 13, pp. 2471–2480, 2002.
[28] M. Desmurget, C. M. Epstein, R. S. Turner, C. Prablanc, G. E. Alexander, and S. T. Grafton, “Role of the Posterior Parietal Cortex in Updating Reaching Movements to a Visual Target,” Nat. Neurosci., Vol. 2, No. 6, pp. 563–567, Jun. 1999.
[29] M. Desmurget, H. Gréa, J. S. Grethe, C. Prablanc, G. E. Alexander, and S. T. Grafton, “Functional Anatomy of Nonvisual Feedback Loops during Reaching: A Positron Emission Tomography Study,” J. Neurosci., Vol. 21, No. 8, pp. 2919–2928, Apr. 2001.
[30] A. Reichenbach, J.-P. Bresciani, A. Peer, H. H. Bülthoff, and A. Thielscher, “Contributions of the PPC to Online Control of Visually Guided Reaching Movements Assessed with fMRI-guided TMS,” Cereb. Cortex, Vol. 21, No. 7, pp. 1602–1612, Jul. 2011.
[31] E. Brenner and J. B. J. Smeets, “Fast Responses of the Human Hand to Changes in Target Position,” J. Mot. Behav., Vol. 29, No. 4, pp. 297–310, Dec. 1997.
[32] D. Whitney, D. A. Westwood, and M. A. Goodale, “The Influence of Visual Motion on Fast Reaching Movements to a Stationary Object,” Nature, Vol. 423, No. 6942, pp. 869–873, Jun. 2003.
[33] N. Saijo, I. Murakami, S. Nishida, and H. Gomi, “Large-field Visual Motion Directly Induces an Involuntary Rapid Manual Following Response,” J. Neurosci., Vol. 25, No. 20, pp. 4941–4951, May 2005.
[34] H. Gomi, N. Abekawa, and S. Nishida, “Spatiotemporal Tuning of Rapid Interactions between Visual-motion Analysis and Reaching Movement,” J. Neurosci., Vol. 26, No. 20, pp. 5301–5308, May 2006.
[35] H. Gomi, K. Kadota, and N. Abekawa, “Dynamic Reaching Adjustment during Continuous Body Perturbation is Markedly Improved by Visual Motion,” Soc. Neurosci. 40th Annu. Meet., 2010.
[36] A. Takemura, N. Abekawa, K. Kawano, and H. Gomi, “Short-latency Manual Responses of Monkey are Impaired by Lesions in the MST,” Soc. Neurosci. 38th Annu. Meet., 2008.
[37] N. Abekawa, T. Inui, and H. Gomi, “Eye-hand Coordination in On-line Visuomotor Adjustments,” Neuroreport, Vol. 25, No. 7, pp. 441–445, May 2014.
[38] J. Prado, S. Clavagnier, H. Otzenberger, C. Scheiber, H. Kennedy, and M.-T. Perenin, “Two Cortical Systems for Reaching in Central and Peripheral Vision,” Neuron, Vol. 48, No. 5, pp. 849–858, Dec. 2005.
[39] N. Abekawa and H. Gomi, “Spatial Coincidence of Intentional Actions Modulates an Implicit Visuomotor Control,” J. Neurophysiol., Vol. 103, No. 5, pp. 2717–2727, May 2010.
Naotoshi Abekawa
Research Scientist, Human and Information Science Laboratory, NTT Communication Science Laboratories.
He received the B.E. from Tokyo Metropolitan University in 2003, the M.E. from Tokyo Institute of Technology in 2005, and the Ph.D. from Kyoto University in 2013. He joined NTT in 2005 and has been engaged in research on human information processing. His research interests include human sensorimotor mechanisms, especially visuomotor control properties. He is a member of the Society for Neuroscience, the Japan Neuroscience Society, and the Japanese Neural Network Society.
Hiroaki Gomi
Distinguished Senior Research Scientist, Group Leader of Sensory and Motor Research Group, NTT Communication Science Laboratories.
He received the B.E., M.E., and Ph.D. in mechanical engineering from Waseda University, Tokyo, in 1986, 1988, and 1994, respectively. He conducted research on computational biological motor control at ATR (Advanced Telecommunication Research Labs., Kyoto) from 1989 to 1994, in two JST-CREST projects from 1996 to 2003, and in a JST-ERATO project from 2005 to 2010. He was an adjunct professor at Tokyo Institute of Technology from 2000 to 2004. His current research interests include human sensorimotor control mechanisms and interactions between perception and action.
