Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than with wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition.