I study how humans perceive, represent, and reason about the visual and physical world under uncertainty. My work explores how structured representations can explain perception and enable resource-rational, generalizable reasoning.
My research centers on three directions:
- Studying how programs can serve as an interpretable form of knowledge representation
- Developing probabilistic and deep learning methods that jointly reason over images and programs
- Enhancing search and inference with large language models (LLMs)