Parisa Abedi Khoozani

Centre for Vision Research, York University

Parisa's lectures

CAN-ACN 2021

2-D-224 - Effector-specific spatial codes in dorsolateral prefrontal cortex during a head-unrestrained reach task.

Authors:

Veronica Nacher¹, Parisa Abedi-Khoozani¹, Vishal Bharmauria¹, Harbandhan Arora¹, Xiaogang Yan¹, Saihong Sun¹, Hongying Wang¹, John Crawford¹

¹York University

Abstract:

Dorsolateral prefrontal cortex (DLPFC) is associated with executive control and response selection, but the extent to which it is involved in effector-specific transformations is unclear. We addressed this question by recording single neurons from DLPFC while two trained monkeys performed a head-unrestrained reaching paradigm that allowed freely coordinated motion of gaze and head, and reaching in depth. Animals touched one of three central LEDs at waist level while maintaining gaze on a central fixation dot and were rewarded if they touched a target appearing at one of 15 locations in a 40° × 20° (visual angle) array. Preliminary analysis of 271 neurons in both monkeys showed an assortment of target/stimulus, gaze, pre-reach, and reach-timed responses in DLPFC. We first tested for gaze, head, and hand gain fields during the different neuronal responses and found that 38% of the responses were gain modulated by initial hand position. A small fraction of neurons showed gain fields for initial eye position (4%) or for both initial eye and hand position (6%). After removing the gain-field effects, we fitted the residual data against various spatial models and found that the visual response best encoded the target relative to space (Ts), whereas responses at gaze and hand onset showed a tendency toward coding hand displacement (dA). In addition, some (20%) neurons showed a preference for coding final head position or displacement. A more complete analysis will describe the full coding and distribution of gaze, head, and reach signals in this region.
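
As a rough illustration of the gain-field analysis described in this abstract (a minimal sketch in Python, not the authors' actual pipeline; the data and variable names below are simulated and hypothetical), one can regress trial-by-trial firing rates on initial eye and hand positions and then work with the residuals:

    import numpy as np

    # Hypothetical per-trial data: firing rate plus initial eye and hand
    # positions (horizontal components, in degrees of visual angle).
    rng = np.random.default_rng(0)
    n_trials = 271
    eye_x = rng.uniform(-20.0, 20.0, n_trials)
    hand_x = rng.uniform(-20.0, 20.0, n_trials)
    # Simulated neuron whose response is gain-modulated by hand position.
    rate = 10.0 + 0.3 * hand_x + rng.normal(0.0, 1.5, n_trials)

    # Linear gain-field model: rate ~ b0 + b_eye*eye_x + b_hand*hand_x.
    X = np.column_stack([np.ones(n_trials), eye_x, hand_x])
    coef, _, _, _ = np.linalg.lstsq(X, rate, rcond=None)
    b0, b_eye, b_hand = coef
    print(f"baseline={b0:.2f}, eye gain={b_eye:.3f}, hand gain={b_hand:.3f}")

    # Residuals after removing the gain-field effects, analogous to the
    # step before fitting spatial models (Ts, dA, etc.).
    residual = rate - X @ coef

In this toy example, a clearly non-zero hand-gain slope would flag the neuron as gain-modulated by initial hand position, while the residuals carry the spatial tuning that is then fitted against the candidate models.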

CAN-ACN 2021

2-D-227 - Mechanisms for integrating allocentric and egocentric visual information for goal-directed movements: a neural network approach

Authors:

Parisa Abedi Khoozani¹, Vishal Bharmauria¹, Adrian Schütz², Richard Wildes¹, Douglas Crawford¹

¹York University, ²Philipps-University Marburg

Abstract:

Allocentric (landmark-centered) and egocentric (eye-centered) visual information are optimally integrated for goal-directed movements. This process has been observed within the supplementary and frontal eye fields, but the underlying processes for this combination remain a puzzle, mainly due to the inadequacy of current theoretical models in explaining data at different levels (i.e., behavior, single neuron, and distributed network). Here, we propose a physiologically inspired neural network with two major components. First, a Convolutional Neural Network (CNN) extracts the allocentric information (target and landmark) using two repeated stages of convolution, rectification, and normalization followed by a feature pooling layer. Second, a Multi-Layer Perceptron (MLP, 3 fully connected layers) incrementally transforms the allocentric information into an integrated motor response; an additional layer transforms motor responses into final gaze positions. The network was trained on both idealized and actual monkey gaze behavior. MLP output units accurately simulated prefrontal motor responses (including open-ended response fields that shifted partially with the landmark), and their decoded output achieved good correspondence (MLP: R² = 0.80) with actual gaze behavior (Bharmauria et al., Cerebral Cortex, 2020). These results suggest that our framework captures key aspects of allocentric-egocentric integration and provides a suitable tool to study its underlying mechanisms. Supported by a VISTA Program fellowship.
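
The two-stage architecture described above can be sketched roughly as follows (a minimal PyTorch sketch under assumed details: the abstract does not specify layer widths, kernel sizes, or input format, so all of those, plus the class and variable names, are illustrative guesses):

    import torch
    import torch.nn as nn

    class AlloEgoNet(nn.Module):
        """Sketch of the CNN + MLP pipeline from the abstract: a CNN
        extracts allocentric information from a retinal image containing
        target and landmark; an MLP transforms it into a motor response,
        with one extra layer mapping that response to gaze position."""

        def __init__(self):
            super().__init__()
            # CNN front end: two repeated stages of convolution,
            # rectification, and normalization, then feature pooling.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.BatchNorm2d(16),
                nn.Conv2d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.BatchNorm2d(32),
                nn.AdaptiveAvgPool2d((4, 4)),  # feature pooling layer
            )
            # MLP: three fully connected layers producing an integrated
            # "motor response" representation.
            self.mlp = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),
            )
            # Additional layer: motor response -> final 2-D gaze position.
            self.gaze_out = nn.Linear(32, 2)

        def forward(self, retinal_image):
            features = self.cnn(retinal_image)
            motor_response = self.mlp(features)
            return self.gaze_out(motor_response)

    # Example: one 64x64 single-channel retinal image -> (x, y) gaze.
    net = AlloEgoNet()
    gaze = net(torch.zeros(1, 1, 64, 64))

Training such a network on simulated or recorded gaze endpoints (e.g., with a mean-squared-error loss on gaze position) would correspond to the idealized and monkey-behavior training regimes mentioned in the abstract.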
