Embodied cognition is a little difficult to define, so I’m going to do it mostly by way of examples. In short, though, it is the view that our bodies – the shapes they are in, the movements they make, the way we use our motor systems – contribute to and even form the basis of our cognition. The more ‘traditional’ view is that our minds and cognitive processes affect our bodies and motor systems, but not the other way around. There is a lot of evidence to support embodied cognition, and with virtual reality and wearable technology now rapidly growing areas of product development, it is incredibly important for designers to understand how it works.

Cognitive scientists and psychologists have devised clever experiments to tease out the evidence for embodied cognition. These often involve forcing experimental subjects to use their bodies and musculature in different ways while performing cognitive tasks, and recording the differences that arise. An easy one to start: researchers had people either bite a pencil between their teeth, which forces the face roughly into a smile (it engages the smile muscles), or grip the pencil between the nose and the upper lip, forcing a frown, or ‘sad face’. When these groups were given sentences to read and understand, the ‘smilers’ comprehended the pleasant sentences faster, and the ‘frowners’ the unpleasant ones. In other words, whether or not you are actually happy or sad, merely using the muscles usually involved in displaying those emotions starts to affect your cognition as though you were.

The theory has drawn contributions from psychologists, philosophers, cognitive scientists, biologists, neuroscientists, AI researchers and experts from other fields. It’s still young, and there’s a lot of growing research and early-stage theorising.

One of the core ideas is that as we develop from birth through childhood, the majority of our early problem-solving takes place through our bodies – working out how to manipulate our muscles to grab a toy we like, or managing to move our entire bodies across the floor to a parent, and so on. This gradually becomes more sophisticated, but these movements and ‘positionings’ of the body form a complex bank of cognitive structures that we later rely on for thinking, and for what we usually consider ‘mental’ problem solving.

I’ll be writing a lot more about this, but here are a couple of things to ponder until then. First, the importance of embodied cognition for wearable technology and virtual reality. How can we design wearable technology that makes us use our bodies in ways that enhance or positively affect cognition – or at least make sure we don’t design wearable technology that affects users’ bodies, and therefore their cognition, in unwanted ways? When we design VR interfaces, or interfaces projected into the user’s visual field, we need to take into account not just how the visual system will process the interface, but also how its presence will influence how users move their bodies, and thus how they think and perceive as a result.

Second, consider what this means for artificial intelligence. Mainstream AI has worked on the premise that we will achieve true artificial intelligence by replicating the brain, or parts of it, or by creating an artificial but analogous ‘computer’ brain. The theory of embodied cognition, however – at least a strong version of it – claims that intelligence and cognition are impossible without embodiment. In other words, it is impossible for something like a computer to really think and experience as a person does without being in a body and having senses. This seems to make a strong case for increasing the involvement of robotics in the study and development of AI.