Learning Representations Specialized in Spatial Knowledge: Leveraging Language and Vision

Guillem Collell, Marie-Francine Moens

Abstract


Spatial understanding is crucial in many real-world problems, yet little progress has been made towards building representations that capture spatial knowledge. Here, we move one step forward in this direction and learn such representations by leveraging a task of predicting continuous 2D spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images. We show that the model succeeds in this task and, furthermore, is capable of predicting correct spatial arrangements for unseen objects if either CNN features or word embeddings of the objects are provided. The differences between visual and linguistic features are discussed. Next, to evaluate the spatial representations learned in the previous task, we introduce a task and a dataset consisting of crowdsourced human ratings of spatial similarity for object pairs. We find that both CNN features and word embeddings predict human judgments of similarity well, and that these vectors can be further specialized in spatial knowledge if we update them while training the model that predicts spatial arrangements of objects. Overall, this paper paves the way towards building distributed spatial representations, contributing to the understanding of spatial expressions in language.
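To make the task concrete, the abstract describes a simple neural network that maps embeddings of an (object, relationship, object) triple to a continuous 2D spatial arrangement. The following is a minimal illustrative sketch, not the authors' code: it assumes a one-hidden-layer network, an arbitrary embedding size, and random (untrained) weights, purely to show the input/output shapes involved.

```python
import numpy as np

# Hypothetical sketch of the described setup (assumed architecture, not the
# paper's implementation): concatenated embeddings of subject, relation, and
# object are mapped through one hidden layer to a 2D spatial output.

rng = np.random.default_rng(0)
EMB_DIM = 50   # assumed size of word embeddings / CNN feature vectors
HID = 100      # assumed hidden-layer width

W1 = rng.normal(0.0, 0.1, (3 * EMB_DIM, HID))
b1 = np.zeros(HID)
W2 = rng.normal(0.0, 0.1, (HID, 2))  # 2D output: a continuous (x, y) arrangement
b2 = np.zeros(2)

def predict_arrangement(subj, rel, obj):
    """Predict a continuous 2D arrangement for a triple like ('cat', 'under', 'chair')."""
    x = np.concatenate([subj, rel, obj])  # (3 * EMB_DIM,)
    h = np.tanh(x @ W1 + b1)              # hidden representation
    return h @ W2 + b2                    # (x, y) prediction

# Placeholder embeddings standing in for e.g. "cat", "under", "chair".
subj = rng.normal(size=EMB_DIM)
rel = rng.normal(size=EMB_DIM)
obj = rng.normal(size=EMB_DIM)
print(predict_arrangement(subj, rel, obj).shape)  # (2,)
```

In the paper's setting, supplying CNN features or word embeddings as the inputs is what allows the model to generalize to objects unseen during training.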




Copyright (c) 2018 Association for Computational Linguistics

This work is licensed under a Creative Commons Attribution 4.0 International License.