Ruiqi Gao ^{1*},
Jianwen Xie ^{2*},
Song-Chun Zhu ^{1},
and Ying Nian Wu ^{1}

(* Equal contributions)

^{1} University of California, Los Angeles (UCLA), USA

^{2} Hikvision Research Institute, Santa Clara, USA

This paper proposes a representational model for grid cells. In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector. Each component of the vector is a unit or a cell. The model consists of three sub-models. (1) Vector-matrix multiplication. The movement from the current position to the next is modeled by matrix-vector multiplication: the vector of the next position is obtained by multiplying the vector of the current position by the matrix of the motion. (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors measures the adjacency between the two corresponding positions, defined by a kernel function of the Euclidean distance between them. Our representational model has explicit algebra and geometry. It can learn the hexagonal patterns of grid cells, and it is capable of error correction, path integration, and path planning.
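To make the three sub-models concrete, here is a minimal NumPy sketch, not the paper's learned model: a hand-constructed representation in which each pair of units encodes the phase of self-position along an assumed random frequency vector, so that self-motion acts as a block-diagonal rotation matrix. This toy construction satisfies the vector-matrix multiplication sub-model exactly, and its inner product depends only on the displacement between the two positions, illustrating the adjacency-kernel idea.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 24                               # number of 2-unit blocks (vector dim = 2K); assumed
freqs = rng.normal(size=(K, 2))      # assumed frequency vectors w_k

def v(x):
    """High-dimensional vector representing 2D self-position x."""
    phases = freqs @ x               # phase <w_k, x> for each block
    return np.concatenate([np.cos(phases), np.sin(phases)]) / np.sqrt(K)

def M(dx):
    """Matrix representing 2D self-motion dx: block-diagonal rotations."""
    theta = freqs @ dx               # rotation angle <w_k, dx> per block
    c, s = np.diag(np.cos(theta)), np.diag(np.sin(theta))
    return np.block([[c, -s], [s, c]])

x = np.array([0.3, -0.1])
dx = np.array([0.05, 0.02])
y = np.array([0.35, -0.08])

# (1) Vector-matrix multiplication: v(x + dx) = M(dx) v(x)
assert np.allclose(M(dx) @ v(x), v(x + dx))

# (3) Adjacency kernel: <v(x), v(y)> = (1/K) sum_k cos(<w_k, x - y>),
# a function of the displacement x - y that decays as |x - y| grows.
assert np.isclose(np.dot(v(x), v(y)),
                  np.dot(v(x + dx), v(y + dx)))
```

For small displacements, $\|v(x) - v(y)\|^2 \approx \frac{1}{K}\sum_k \langle w_k, x - y\rangle^2$, so the angle between nearby vectors grows roughly linearly with the Euclidean distance between the positions, the behavior that sub-model (2) imposes as an explicit constraint. In the paper the representation and motion matrices are learned rather than hand-built as above.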

The paper can be downloaded here.

The poster can be downloaded here.

The Python code, implemented in TensorFlow, can be downloaded here.

If you wish to use our code or results, please cite the following paper:

@article{gao2018learning,
  title={Learning Grid-like Units with Vector Representation of Self-Position and Matrix Representation of Self-Motion},
  author={Gao, Ruiqi and Xie, Jianwen and Zhu, Song-Chun and Wu, Ying Nian},
  journal={arXiv preprint arXiv:1810.05597},
  year={2018}
}


We thank the three reviewers for their insightful comments and suggestions. Part of the work was done while Ruiqi Gao was an intern at Hikvision Research Institute during the summer of 2018. She thanks Director Jane Chen for her help and guidance. We also thank Jiayu Wu for her help with experiments and Zilong Zheng for his help with visualization. The work is supported by DARPA XAI project N66001-17-2-4029; ARO project W911NF1810296; ONR MURI project N00014-16-1-2007; and a Hikvision gift to UCLA. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
