The choice of visual representation is key to scaling generalist robot policies. However, direct evaluation via policy rollouts is expensive, even in simulation. Existing proxy metrics focus on the representation's capacity to capture narrow aspects of the visual world, like object shape, limiting generalization across environments. In this paper, we take an analytical perspective: we probe pretrained visual encoders by measuring how well they support decoding of environment state—including geometry, object structure, and physical attributes—from images. Leveraging simulation environments with access to ground-truth state, we show that this probing accuracy strongly correlates with downstream policy performance across diverse environments and learning settings, significantly outperforming prior metrics. Our study provides insight into the representational properties that support generalizable manipulation, suggesting that learning to encode full environment state is a promising objective for visual representations for control.
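Concretely, the probing step can be summarized as a frozen pretrained encoder followed by a lightweight regression head trained to predict the simulator's ground-truth state vector (object poses, geometry, physical attributes) from image features. The sketch below is illustrative only; the encoder interface, state layout, and probe architecture are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a state-decoding probe on top of a frozen visual encoder.
# All names (StateProbe, probe_error, loader) are illustrative assumptions.
import torch
import torch.nn as nn

class StateProbe(nn.Module):
    """MLP head that regresses ground-truth environment state from frozen features."""
    def __init__(self, feat_dim: int, state_dim: int, hidden: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)

@torch.no_grad()
def probe_error(encoder: nn.Module, probe: StateProbe, loader, device="cpu") -> float:
    """Average per-dimension MSE between predicted and simulator ground-truth state."""
    encoder.eval()
    probe.eval()
    total, n = 0.0, 0
    for images, state in loader:              # state: ground truth from the simulator
        feats = encoder(images.to(device))    # frozen pretrained features
        pred = probe(feats)
        total += nn.functional.mse_loss(pred, state.to(device), reduction="sum").item()
        n += state.numel()
    return total / n
```

In this setup the probe is the only trained component, so its error reflects how much of the environment state is linearly or shallowly recoverable from the frozen representation.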
We present a systematic comparison of diverse pretrained vision encoders across multiple benchmarks. The results show that there is no single optimal representation for robot manipulation; performance varies significantly across deployment environments and tasks.
We evaluate the state-regression objective as a proxy for predicting the success rate of downstream policies. As shown below, our proposed proxy task achieves strong correlation and low MMRV (mean maximum rank violation) in all four environments (MetaWorld, RoboCasa, SimplerEnv-Google, SimplerEnv-WidowX).
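For reference, a plausible implementation of the two proxy-quality metrics is sketched below: Pearson correlation between proxy scores and policy success rates, and MMRV, which penalizes pairs of encoders whose ordering under the proxy disagrees with their ordering under true success rate, weighted by the size of the true success-rate gap. This follows the common SimplerEnv-style MMRV formulation; treat it as an assumption-laden sketch rather than the paper's exact evaluation code.

```python
# Hedged sketch of the proxy-quality metrics: Pearson correlation and MMRV
# (mean maximum rank violation, lower is better). Variable names are illustrative.
import numpy as np

def pearson(proxy: np.ndarray, success: np.ndarray) -> float:
    """Linear correlation between proxy scores and downstream success rates."""
    return float(np.corrcoef(proxy, success)[0, 1])

def mmrv(proxy: np.ndarray, success: np.ndarray) -> float:
    """For each pair (i, j), a violation occurs when the proxy orders the two
    encoders differently from their true success rates, weighted by
    |success_i - success_j|. MMRV averages, over i, the largest violation
    involving i."""
    n = len(proxy)
    worst = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            mis_ordered = (proxy[i] < proxy[j]) != (success[i] < success[j])
            violation = abs(success[i] - success[j]) if mis_ordered else 0.0
            worst[i] = max(worst[i], violation)
    return float(worst.mean())

# Example usage with made-up numbers (one entry per encoder):
# proxy = np.array([0.8, 0.6, 0.9, 0.4]); success = np.array([0.75, 0.55, 0.85, 0.30])
# print(pearson(proxy, success), mmrv(proxy, success))
```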
@inproceedings{dong2026visualenv,
  title={Capturing Visual Environment Structure Correlates with Control Performance},
  author={Dong, Jiahua and Man, Yunze and Tokmakov, Pavel and Wang, Yu-Xiong},
  booktitle={ICLR},
  year={2026}
}