VISUAL EXPLORATION IN A NON-HUMAN PRIMATE MODEL OF ARTIFICIAL VISION

Saturday, February 18, 2017
Exhibit Hall (Hynes Convention Center)
Nathaniel Killian, Harvard Medical School - MGH, Boston, MA
We have previously shown that non-human primates are capable of learning to recognize the letters of the Roman alphabet at variable size and with discrete visual elements, simulated phosphenes. Here, we have examined the eye movement patterns of three non-human primates performing a letter recognition task in a visual prosthesis simulation. We found that, while eye movements were innate and present throughout learning, the animals’ performance improved as they honed their usage of eye movements to explore the various features of glyphs. In particular, saccade rates increased during learning and higher saccade rates were correlated with greater performance throughout most of the learning time course. Each animal employed unique global and local patterns of eye movements. The global patterns were identified at the outset of learning, suggesting the distinct exploration styles were a result of innate biases rather than high-level strategies. When examining individual letters, we found that each animal had their own preferential viewing regions for each letter, e.g. the vertices of V or the opening in C. Viewing of these specific regions was detected on all conditions down to the lowest phosphene densities and at all but the smallest font size, demonstrating that the animals recognized individual letters even in some of the hardest conditions where performance above chance was barely detectable. Human subjects, who were inherently high-performing, showed a surprising lack of exploratory eye movements in this task, which may reflect expertise and that they are able to gather sufficient information from non-central locations in the visual field. Indeed, we expect that use of the oculomotor system will become more efficient as learning progresses. In support of this idea, we found that as animals became proficient at recognizing clear text letters, their saccade rates gradually decreased. 
All animals performed better when making more saccades within the glyph region and when exhibiting high microsaccadic jitter. Furthermore, the fastest-learning animal used an exploration style characterized by a high saccade rate combined with high microsaccadic jitter. Our results suggest that training visual prosthesis recipients to explore with a high-saccade-rate, high-jitter style, or artificially mimicking this style, will improve both performance with and learning of phosphene vision.