We use words to communicate about things and kinds of things, their properties, relations, and actions. Researchers are now creating robotic and simulated systems that ground language in machine perception and action, mirroring human abilities. From this work, a new kind of computational model is emerging that bridges the symbolic realm of language with the physical realm of real-world referents.
Such models can explain context-dependent shifts of word meaning that purely symbolic models cannot easily capture. An exciting implication for cognitive modeling is the use of grounded systems to ‘step into the shoes’ of humans by directly processing first-person-perspective sensory data, providing a new methodology for testing hypotheses about situated communication and learning.