Visual, Auditory, And Kinesthetic Learners
The concept of visual, auditory, and kinesthetic learners is probably familiar to you. It states that each person has a preferred way of receiving new information, through one of three senses. Vision (seeing) and audition (hearing) are clear enough, but kinesthesia might require an explanation. Kinesthesia is the sensation that tells you where your body parts are. If you were to close your eyes and I moved your arm as though you were, say, waving, you would know where your arm was even though you couldn't see it. That information comes from special receptors in your joints, muscles, and skin. That's kinesthesia.
The visual-auditory-kinesthetic theory holds that everyone can take in new information through any of the three senses, but most of us have a preferred sense. When learning something new, visual types like to see diagrams, or even just to see in print the words that the teacher is saying. Auditory types prefer verbal descriptions they can listen to. Kinesthetic learners like to manipulate objects physically; they move their bodies in order to learn (Figure 3).
To give you a backdrop against which to evaluate this theory, I'll start with a few facts about memory that cognitive scientists have worked out. People do differ in their visual and auditory memory abilities. That is, our memory system can store both what things look like and what they sound like. We use visual memory representations when we create a visual image in our mind's eye. For example, suppose I ask you, "What is the shape of a German shepherd's ears?" or "How many windows are there in your classroom?" Most people say they answer these questions by creating a visual image and inspecting it. A great deal of work by experimental psychologists during the 1970s showed that such images do have a lot of properties in common with vision—that is, there's a lot of overlap between your "mind's eye" and the parts of your brain that allow you to see. We also store some memories as sound, such as Katie Couric's voice, the roar of the MGM lion, or our mobile phone's ringtone. If I ask you, for example, "Who has a deeper voice: your principal or your superintendent?" you will likely try to imagine each person's voice and compare them. We can store both visual and auditory memories, and as with any other cognitive function, each of us varies in how effectively we do so. Some of us have very detailed and vivid visual and auditory memories; others of us do not.
Cognitive scientists have also shown, however, that we don't store all of our memories as sights or sounds. We also store memories in terms of what they mean to us. For example, if a friend tells you a bit of gossip about a coworker (who was seen coming out of an adult bookshop), you might retain the visual and auditory details of the telling (for example, how the person relating the story looked and sounded), but it's more likely that you will remember only the content of the story (adult bookshop) without retaining any of the auditory or visual details of being told. Meaning has a life of its own, independent of sensory details (Figure 4).