Monday, June 15, 2015
Generally speaking, you cannot learn new information from sounds played while you sleep, though sleep-learning was a fad several decades ago. But in an earlier post, I discussed a new line of research showing that a form of sleep learning can occur. The key is to play sound cues that were associated with learning during the previous wakefulness period. The explanation I posted was that cue-dependent sleep learning can work because a normal function of sleep is to strengthen memories of new information; presenting relevant cues during sleep increases the retrieval of these memories and makes them accessible for rehearsal and strengthening.
The latest experiment, by a different group, shows that cuing during sleep can modify biased attitudes and habits. The test involved counter-stereotype training of certain biased attitudes during wakefulness; investigators then reactivated that counter-training during sleep by playing a sound cue that had been associated with the wakefulness training.
In the experiment, before a 90-minute nap, 40 white males and females were trained to counter their existing gender and racial biases. A formal survey allowed quantification of each person's level of gender or racial bias before and after counter-training. For example, one bias was that females are not good at math. Subjects were conditioned to have a more favorable attitude about women and math with counter-training that repeatedly associated female faces with science-related words. Similarly, racial bias toward blacks was countered by associating black faces with highly positive words. In each training situation, whenever the subject saw a pairing that was incompatible with their existing bias, they pressed a "correct" button, which yielded a confirmatory sound tone that was unique for each bias condition. Subjects were immediately tested on their learning by being shown a face (female or black) along with the counter-training sound cue, whereupon they were to drag the appropriate bias-free face onto a screen with the positive word. For example, if the first test screen showed a woman, accompanied by the sound cue, the subject dragged a woman's face onto a second screen that said "good at math." Results revealed that this conditioning worked: both kinds of bias were reduced immediately after counter-conditioning.
Then during the nap, as soon as EEG signs indicated the presence of deep sleep, the appropriate sound cue was played repeatedly to reactivate the prior learning. When subjects re-took the bias survey a week later, the social bias was reduced in the sound-cued group, but not in the control group that was trained without sound cues.
Experimenters noted that the long-term reduction in bias was associated with rapid-eye-movement (REM, or dream) sleep, which often followed the deep sleep during the early stages of the nap. That is, the beneficial effect was proportional to the amount of nap time spent in both slow-wave sleep and REM sleep, not either alone. It may be that memories are reactivated by cuing during deep (slow-wave) sleep, but that the actual cell-level storage of memory is provided by REM sleep.
Implications of this approach to enhancing learning and memory show a great deal of promise. Can it be used for enhancing learning in school? Can it be used in rehabilitation of addicts or criminals? But there is a dark side. Now might be a good time to re-read Huxley's Brave New World wherein he actually described conditioning values in young children while they slept. Sleep is a state where people are mentally vulnerable and without conscious control over their thoughts. Malevolent people could impose this kind of conditioning and memory enhancement on others for nefarious purposes. These techniques may have valid social engineering applications, but they must be guided by ethical considerations.
Dr. Klemm is author of Memory Power 101 (Skyhorse), Better Grades, Less Effort (Benecton), and Mental Biology (Prometheus).
Klemm, W. R. (2013). New discoveries on optimizing memory formation. http://thankyoubrain.blogspot.com/2013/05/new-discoveries-on-optimizing-memory.html
Hu, Xiaoqing et al. (2015). Unlearning implicit social biases during sleep. Science, 348(6238), 1013-1015.
Wednesday, June 03, 2015
The three most important times for learning are: Before, During, and (soon) After.
1. Bring your “A game.” Choose to be positive and interested. Being bored is a choice— a self-defeating choice.
2. Check your foundation. Come prepared.
3. Expect to remember.
4. Pay Attention. Ask questions.
5. Take good notes.
6. Avoid mental interference. Use quiet, uninterrupted reflection during rehearsal.
7. Apply what you just learned.
8. Self-test. Really test, don't just "look over." Repeat several times in the next hours and days.
"Memory Medic" is author of Memory Power 101 and Better Grades, Less Effort. Both are available at Amazon.com.
Saturday, May 30, 2015
In the previous post, Decision-making 101, I provided evidence that selective attention to items retrieved into working memory is a major factor in making good decisions. This has generally unrecognized educational significance. Rarely is instructional material packaged with foreknowledge of how it can be optimized to reduce the cognitive load on working memory. New research from a cognitive neuroscience group in the U.K. demonstrates the particular importance this has for learning how to correctly categorize new material. They show that learning is more effective when the instruction is optimized ("idealized" in their terminology).
Decisions often require categorizing novel stimuli, such as normal/abnormal, friend/foe, helpful/harmful, right/wrong or even assignment to one of multiple category options. Teaching students how to make correct category assignments is typically based on showing them examples for each category. Categorization issues routinely arise when learning is tested. For example, the common multiple-choice testing in schools requires that a decision be made on each potential answer as right or wrong.
In reviewing the literature on optimizing training, these investigators found reports that one approach that works is to present training in a specific order. For example, in teaching students how to classify by category, people perform better when a number of examples from one category are presented together, followed by a number of contrasting examples from the other category. In another ordering manipulation, material is learned better if simple, unambiguous cases in either category are presented together early in training, while the harder, more confusing cases are presented afterward. Such training strengthens the contrast between the two categories.
The British group has focused on the role of working memory in learning. Their idea is that ambiguity during learning is a problem. In real-world situations that require correct category identification, naturally occurring ambiguities make correct decisions difficult. Think of these ambiguities as cognitive "noise" that interferes with the training that is recalled into working memory. This noise clutters encoding during learning and impairs the rigorous thinking that may be needed to make a correct distinction. In the real world of youngsters in school, another major source of cognitive noise is the task-irrelevant stimulation that comes from the multi-tasking habits so common in today's students.
The theory is that when performing a learned task, the student recalls what has been taught into working memory. Working memory has very limited capacity, so any "noise" associated with the initial learning may be incompletely encoded and the remembered noise may also complicate the thinking required to perform correctly. Thus, simplifying learning material should reduce remembered ambiguities, lower the working memory load, and enable better reasoning and test performance.
One example of optimizing learning is the study by Hornsby and Love (2014), who applied the concept to training people with no prior medical training to decide whether a given mammogram was normal or cancerous. They hypothesized that learning would be more efficient if students were trained on mammograms that were easily identified as normal or cancerous, excluding examples where the distinction was not so obvious. The underlying premise is that decision-making involves recalling past remembered examples into working memory and accumulating the evidence for the appropriate category. If the remembered items are noisy (i.e., ambiguous), the noise also accumulates and makes the decision more difficult. Thus, learners will have more difficulty if they are trained on examples across the whole range of possibilities, from clearly evident to obscure, than if they are trained only on examples that clearly belong to one category or the other.
Initially a group of learners was trained on a full-range mixture of mammograms so the images could be classified by diagnostic difficulty as easy or hard or in between. On each trial, three mammograms were shown: the left image was normal, the right was cancerous, and the middle was the test item requiring a diagnosis of whether it was normal or cancerous.
In the actual experiment, one student group was trained to classify a representative set of easy, medium, and hard images, while the other group was trained only on easy samples. During training trials, learners looked at the three mammograms, stated their diagnosis for the middle image, and were then given feedback as to whether they were right or wrong. After completing all 324 training trials, participants completed 18 test trials, which consisted of three previously unseen easy, medium and hard items from each category displayed in a random order. Test trials followed the same procedure as training trials.
When both groups were tested on samples across the full range, the optimized group was better able to distinguish normal from cancerous mammograms in both the easy and medium images. Note that the optimized group was not trained on medium images. However, no advantage was found for hard test items; both groups made many errors on the hard cases, and optimized training actually yielded poorer results than regular training.
We need to explain why this strategy does not seem to work on hard cases. I suspect that in easy and medium cases, not much understanding is required. It is just a matter of pattern recognition, made easier because the training was more straightforward and less ambiguous. The learner is just making casual visual associations. For hard cases, a learner must know and understand the criteria needed to make distinctions. The subtle differences go unrealized if diagnostic criteria are not made explicit in the training. In actual medical practice, many mammograms actually cannot be distinguished by visual inspection—they really are hard. Other diagnostic tests are needed.
The basic premise of such research is that learning objects or tasks should be pared down to the basics, eliminating extraneous and ambiguous information, which constitutes "noise" that confounds the ability to make correct categorizations.
In common learning situations, a major source of noise is extraneous information, such as marginally relevant detail. Reducing this noise is achieved by focusing on the underlying principle. Actually, I stumbled on this basic premise of simplification over 50 years ago when I was a student trying to optimize my own learning. What I realized was the importance of homing in on the basic principle of what I was trying to learn from instructional material. If I understood a principle, I could use that understanding to think through many of the implications and applications.
In other words, the principle is: "don't memorize any more than you have to." Use the principles as a way to figure out what was not memorized. Once core principles are understood, much of the basic information can be deduced or easily learned. This is akin to the standard practice of moving from the general to the specific. Even so, the general ideas presented should emphasize principles.
Textbooks are sometimes quite poor in this regard. Too many texts have so much ancillary information in them that they should be thought of as reference books. That is why I have found a good market for my college-level neuroscience electronic textbook, "Core Ideas in Neuroscience," in which each 2-3 page chapter is based on one of the 75 core principles that cover the broad span from membrane biochemistry to human cognition. A typical neuroscience textbook by other authors can run up to 1,500 pages.
Hornsby, A. N., and Love, B. C. (2014). Improved classification of mammograms following idealized training. Journal of Applied Research in Memory and Cognition, 3(2), 72-76.
Dr. Klemm is a Senior Professor of Neuroscience at Texas A&M. His latest books are Memory Power 101, (Skyhorse) and Mental Biology (Prometheus). He also writes learning and memory blogs for Psychology Today magazine and his own site at thankyoubrain.blogspot.com. His posts have nearly 1.5 million reader views.
Thursday, May 21, 2015
Teenagers are notorious for poor decision-making. Of course that is inevitable, given that their brains are still developing, and they have had relatively little life experience to show them how to predict what works and what doesn’t. Unfortunately, what doesn’t work may have more emotional appeal, and most of us at any age are more susceptible to our emotions than cold, hard logic.
Seniors also are prone to poor decision-making if senility has set in. Unscrupulous people take advantage of such seniors because a brain that is deteriorating has a hard time making wise decisions.
In between the teenage years and senility, the brain is at its peak for good decision-making. Wisdom comes with age, up to a point. Some Eastern cultures venerate their old people as being especially wise. After all, if you live long enough and are still mentally healthy, you ought to make good decisions, because you have a lifetime of experience to teach you which future choices are likely to work and which are not.
Much of that knowledge comes from learning from one’s mistakes. On the other hand, some people, regardless of age, can’t seem to learn from their mistakes. Most of the time the problem is not stupidity but a flawed habitual process by which one is motivated to make wise decisions and evaluate options. Best of all is learning from somebody else’s mistakes, so you don’t have to make them yourself.
Learning from your mistakes can be negative, if you fret about it. Learning what you can do to avoid repeating a mistake is one thing, but dwelling on it erodes one’s confidence and sense of self worth. I can never forget the good advice I read from, of all people, T. Boone Pickens. He was quoted in an interview as saying that he was able to re-make his fortune on multiple occasions because he didn’t dwell on losing the fortunes. He credited that attitude to his college basketball coach who told the team after each defeat, “Learn from your mistakes, but don’t dwell on them. Learn from what you did right and do more of that.”
It would help if we knew how the brain makes decisions, so we could train it to operate better. "Decision neuroscience" is an emerging field of study aimed at learning how brains make decisions and how to optimize the process. Neuroscientists seem to have homed in on two theories, both of which deal with how the brain processes alternative options to arrive at a decision.
One theory is that each option is processed in its own competing pool of neurons. As processing evolves, the activity in each pool builds up and down as each pool competes for dominance. At some point, activity builds up in one of the pools to reach a threshold, in winner-take-all fashion, to allow the activity in that pool to dominate and issue the appropriate decision commands to the parts of the brain needed for execution. As one possible example, two or more pools of neurons separately receive input that reflects the representation of different options. Each pool sends an output to another set of neurons that feed back either excitatory or inhibitory influences, thus providing a way for competition among pools to select the pool that eventually dominates because it has built up more impulse activity than the others.
The other theory is based on guided gating wherein input to pools of decision-making neurons is gated to regulate how much excitatory influence can accumulate in each given pool. [i] The specific routing paths involve inhibitory neurons that shut down certain routes, thus preferentially routing input to a preferred accumulating circuit. The route is biased by estimated salience of each option, current emotional state, memories of past learning, and the expected reward value for the outcome of each option.
These decision-making possibilities involve what is called “integrate and fire.” That is, input to all relevant pools of neurons accumulates and leads to various levels of firing in each pool. The pool firing the most is most likely to dominate the output, that is, the decision.
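The "integrate and fire" race just described can be sketched in a few lines of code. To be clear, this is only a toy illustration of the winner-take-all idea, not any published model; the drift rates, noise level, and threshold below are made-up values chosen for the sketch.

```python
import random

def race_to_threshold(drifts, threshold=100.0, noise=2.0, seed=None):
    """Toy 'integrate and fire' race: each option's pool of neurons
    accumulates noisy evidence each step, and the first pool to cross
    the threshold dominates, winner-take-all."""
    rng = random.Random(seed)
    totals = [0.0] * len(drifts)
    step = 0
    while True:
        step += 1
        # Each pool integrates its input plus random "neural" noise.
        for i, drift in enumerate(drifts):
            totals[i] += drift + rng.gauss(0, noise)
        winners = [i for i, t in enumerate(totals) if t >= threshold]
        if winners:
            # The pool with the most accumulated activity issues the decision.
            return max(winners, key=lambda i: totals[i]), step

# Option 0 receives slightly stronger input, so it usually wins the race.
choice, steps = race_to_threshold([1.2, 1.0], seed=42)
print(f"Chosen option: {choice} after {steps} steps")
```

Biasing the drift of one pool, as the gating theory suggests the brain does with salience, emotion, and expected reward, shifts both which option wins and how quickly the race resolves.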
However circuits make decisions, there is considerable evidence that nerve impulse representations for each given choice option simultaneously code for expected outcome and reward value. These value estimates update on the fly.[ii] Networks containing these representations compete to arrive at a decision.
Any choice among alternative options is affected by how much information for each option the brain has to work on. When the brain is consciously trying to make a decision, this often means how much relevant information the brain can hold in working memory. Working memory is notoriously low-capacity, so the key becomes remembering the sub-sets of information that are the most relevant to each option. Humans think with what is in their working memory. Experiments have shown that older people are more likely to hold the most useful information in working memory, and therefore they can think more effectively. The National Institute of Aging began funding decision-making research in 2010 at Stanford University’s Center on Longevity. Results of their research are showing that older people often make better decisions than younger people.
As one example, older people are more likely to make rational cost-benefit analyses. Older people are more likely to recognize when they have made a bad investment and walk away rather than throwing more good money after bad.
A key factor seems to be that older people are more selective about what they remember. For example, one study from the Stanford Center compared the ability of young and old people to remember a list of words. Not surprisingly, younger people remembered more words, but when words were assigned a number value, with some words being more valuable than others, older people were better at remembering high-value words and ignoring low-value words. It seems that older people selectively remember what is important, which should make it easier to make better decisions.
Decision-making skills are important for learning achievement in school. Students need to know how to focus in general, and how to focus on what is most relevant in particular. They are not learning that skill, and their multi-tasking culture is teaching them many bad habits.
Those of us who care deeply about educational development of youngsters need to push our schools to address the thinking and learning skills of students. "Teaching to the test" detracts from time spent in teaching what matters most. Today's culture of multi-tasking is making matters worse. Children don't learn how to attend selectively and intensely to the most relevant information, because they are distracted by superficial attention to everything. Despite their daily use of Apple computers and smart phones, only one college student out of 85 could draw the Apple logo correctly.[iii]
Memory training is generally absent from teacher training programs. Despite my locally well-publicized experience in memory training, no one in the College of Education at my university has ever asked me to share my knowledge with their faculty or with pre-service teachers. The paradox is that teachers are trained to help students remember curricular answers for high-stakes tests. What could be more important than learning how to decide what to remember and how to remember it? And we wonder why student performance is so poor?
"Memory Medic" is author of Memory Power 101 (Skyhorse) and Better Grades, Less Effort (Benecton).
[i] Purcell, B. A., Heitz, R. P., Cohen, J. Y., Schall, J. D., et al. (2010). Neurally constrained modeling of perceptual decision making. Psychological Review, 117(4), 1113-1143.
[ii] McCoy, A. N., and Platt, M. L. (2005). Expectations and outcomes: decision-making in the primate brain. J. Comp. Physiol A 191, 201-211.
[iii] Blake, A. B., Nazarian, M., and Castel, A. D. (2015). The Apple of the mind's eye: Everyday attention, metamemory, and reconstructive memory for the Apple logo. The Quarterly Journal of Experimental Psychology. DOI: 10.1080/17470218.2014.1002798
Thursday, April 30, 2015
Whether the music is orchestral, rock, country, or jazz, most seniors like to listen to some kind of music. Music can soothe or energize, make us happy or sad, but the kind we like to hear does something that can be positively reinforcing or otherwise we would not listen to it. As my 80-year-old jazz trumpeter friend, Richard Phelps, recently said at his birthday party, "Where there is life there is music. Where there is music, there is life."
Relatively little research has been done on the effects of music on brain function in older people. But one study recently reported the effects in older adults of background music on brain processing speed and two kinds of memory (episodic and semantic). The subjects were not musicians and had an average age of 69 years.
The music test conditions were: 1) no music control, 2) white noise control, 3) a Mozart recording, and 4) a Mahler recording. All 65 subjects were tested in counter-balanced order in all four categories. The music was played at modest volume as background before and during performance of the cognitive tasks, a mental processing speed task and the two memory tasks. The episodic memory task involved trying to recall a list of 15 words immediately after a two-minute study period. The semantic memory task involved word fluency in which subjects wrote as many words as they could think of beginning with three letters of the alphabet.
Processing speed performance was faster while listening to Mozart than with the Mahler or white noise conditions. No improvement in the Mahler condition was seen over white noise or no music.
Episodic memory performance was better when listening to either type of music than while hearing white noise or no music. No difference was noted between the two types of music.
Semantic memory was better for both kinds of music than with white noise, and better with Mozart than with no music.
Recognizing that emotions could be a relevant factor, the experimenters analyzed a mood questionnaire comparing the two music conditions with white noise. Mozart generated higher happiness indicators than did Mahler or white noise. Mahler was rated more sad than Mozart and comparable to white noise.
Thus, happy, but not sad, music correlated with increased processing speed. The researchers speculated that happy subjects were more aroused and alert.
Surprisingly, both happy and sad music enhanced both kinds of memory over the white noise or silence conditions. But it is not clear if this observation is generally applicable. The authors did mention, without emphasis, that both kinds of music were instrumental and lacked loudness or lyrics that could have been distracting and thus impaired memory. I think this point is substantial. When lyrics are present, the brain is dragged into trying to hear the words and thinking about their meaning. These thought processes would surely interfere with trying to memorize new information or recall previously learned material.
A point not considered at all is personal preference for certain types of music. There are people who don't like classical music, and the data in this study could have been made "noisy" if enough of the 65 people disliked classical music and were actually distracted by it. In other words, the effects noted in this study might have been magnified if the subjects had been allowed to hear their preferred music.
My take-home lesson was actually formed over five decades ago when I listened to jazz records while plowing my way through memorizing a veterinary medical curriculum. Then, I thought that the benefit was stress reduction (veterinary school IS stressful, and happy jazz certainly reduces stress). Now I see that frequent listening to music that was pleasurable for me might have actually helped my memory capability. If you still have doubts, you might want to check my latest blog post, "Happy thoughts can make you more competent" (http://thankyoubrain.blogspot.com/2015/01/happy-thoughts-can-make-you-more.html).
Anyway, now that I am in the elderly category, I see there is still reason to listen to the music I like. Music can be therapy for old age.
“People haven't always been there for me but music always has.”
"Memory Medic's" latest book is "Improve Your Memory for a Healthy Brain. Memory Is the Canary in Your Brain's Coal Mine." It is available in inexpensive e-book form at Amazon or in all formats at Smashwords.com.
Bottiroli, Sara et al. (2014). The cognitive effects of listening to background music on older adults: processing speed improves with upbeat music, while memory seems to benefit from both upbeat and downbeat music. Frontiers in Aging Neuroscience. Oct. 15. doi: 10.3389/fnagi.2014.00284.
Saturday, April 25, 2015
We have all been told by teachers that learning occurs best when we spread it out over time, rather than trying to cram everything into our memory banks at one time. But what is the optimal spacing? There is no general consensus.
However we do know that immediately after a learning experience the memory of the event is extremely volatile and easily lost. It's like looking up a number in the phone book: if you think about something else at the same time you may have to look the number up again before you can dial it. School settings commonly create this problem. One learning object may be immediately followed by another, and the succession of such new information tends to erase the memory of the preceding ones.
Memory researchers have known for a long time that repeated retrieval enhances long-term retention. This happens because each time we retrieve a memory, it has to be reconsolidated and each such reconsolidation strengthens the memory. Though optimal spacing intervals have not been identified, research confirms the importance of spaced retrieval. No doubt, the nature of the information, the effectiveness of initial encoding, competing experiences, and individual variability affect the optimal interval for spaced learning.
One study revealed that repeated retrieval of learned information (100 Swahili-English word pairs) with long intervals produced a 200% improvement in long-term retention relative to repeated retrieval with no spacing between tests. Investigators also compared spacing intervals of 15, 30, or 90 minutes that expanded (for example, 15-30-45 min), stayed the same (30-30-30 min), or contracted (45-30-15 min), and found that no one relative spacing pattern was superior to any other.
Another study has revealed that the optimally efficient gap between study sessions depends on when the information will be tested in the future. A very comprehensive study of this matter in 1,350 individuals involved teaching them a set of facts and then testing them for long-term retention after 3.5 months. A final test was given at a further delay of up to one year. At any test delay, increasing the inter-study gap between the first learning and a restudy of that material at first increased and then gradually reduced final test performance. Expressed as a ratio, the optimal gap equaled 10-20% of the test delay. That is, for example, a one-day gap was best for a test to be given seven days later, while a 21-day gap was best for a test 70 days later. Few if any teachers or students know this, and their study times are rarely scheduled in any systematic way, typically being driven by test schedules for other subjects, convenience, or even the teacher's whim.
The bottom line: the optimal time to review a newly learned experience is just before you are about to forget it. Obviously, we usually don't know when this occurs, but in general the vast bulk of forgetting occurs within the first day after learning. As a rule of thumb, you can suspect that a few repetitions early on should be helpful in fully encoding the information and initiating a robust consolidation process. So, for example, after each class a student should quickly remind herself what was just learned—then that evening do another quick review. Before the next class on that subject, the student should review again. Teachers help this process by linking the next lesson to the preceding one.
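The 10-20% rule of thumb from the spacing study described above reduces to simple arithmetic. The helper below is just an illustration of that ratio; the function name and default percentages are mine, taken from the figures reported in the study.

```python
def optimal_study_gap(test_delay_days, ratio_low=0.10, ratio_high=0.20):
    """Rule of thumb: the best gap between first study and review is
    roughly 10-20% of the time remaining until the test."""
    return (test_delay_days * ratio_low, test_delay_days * ratio_high)

# A test 7 days out suggests reviewing about 0.7-1.4 days after learning;
# a test 70 days out suggests reviewing about 7-14 days after learning.
for delay in (7, 70):
    low, high = optimal_study_gap(delay)
    print(f"Test in {delay} days: review after {low:.1f}-{high:.1f} days")
```

The one-day gap for a seven-day test delay reported in the study (a ratio of about 14%) falls squarely inside this range.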
Certain practices will reduce the amount of time needed for study and improve long-term memory formation. These include:
• Don't procrastinate. Do it now!
• Organize the information in ways that make sense (outlines, concept maps)
• Identify what needs to be memorized and what does not.
• Focus. Do not multi-task. No music, cell phones, TV or radio, or distractions of any kind.
• Associate the new with things you already know.
• Associate words with mental images and link images to locations, or in story chains
• Think hard about the information, in different contexts
• Study small chunks of material, in short intervals. Then take a mental break.
• Say out loud what you are trying to remember.
• Practice soon after learning and frequently thereafter at spaced intervals.
• Explain what you are learning to somebody else. Work with study groups later.
• Self-test. Don't just "look over" the material. Truly engage with it.
• Never, never, ever CRAM!
Friday, April 03, 2015
I read a lot about educational theory and research so that I can share "best practices" for better ways to teach and learn with my readers. Shared here are the ideas in a most informed and intelligent article on learning, written by Catherine L'Ecuyer, a Canadian lawyer with an MBA now living in Barcelona, Spain. The article explains a fundamental factor in motivating children to learn: the sense of wonder.
This notion resonated with me, because I know it to be true from personal experience. To this day, I remember my first walk to school.
Sauntering to school, I became entranced with all the new sights and sounds, stopping several times along the way to savor a new experience. A vivid memory was my stop at a beautiful flower I had never seen before. I physically probed the bloom, astonished at the elegant expression of nature. On that day, the prospect of school was a most joyous opportunity. It did not take long for school's pedantic nature, drills, and drudgery to squelch my sense of wonder. It was only in late middle school that my sense of wonder was resurrected, and that only occurred because I had a crush on my teacher and wanted to impress her with my learning. For many children, their inherent sense of wonder that school stamps out never returns.
Clearly, a child's state of mind affects how learning and the school environment are regarded. Among the more relevant states is stimulus seeking. That is why for example I wanted to explore the innards of that beautiful flower. Then too, there is the basic human responsiveness to positive reinforcement. If a learning experience is perceived as wondrous, it is perceived as good and beneficial, serving as incentive to have other such learning experiences. It obviously helps for a child to be aware of such perceptions.
L'Ecuyer adds the sense of wonder to the list of fundamentals of the motivation to learn. She makes the point that wonder is innate in children, especially when they are young. As a child matures, much of this sense of wonder can fade. For some people, the more they learn, the less wondrous the world seems. Among adults, scientists seem to be an exception (to a biologist, pond scum is beautiful and wondrous).
L'Ecuyer argues that modern educational paradigms are behaviorist and conflict with nurturing the sense of wonder in children. By behaviorist, she means that the guiding principle of teaching is that the environment directs learning with its emphasis on teachers, curriculum, and high-stakes testing. The popular mantra is that learning is better when it is provided earlier and in abundance. Curricula are designed to bombard students with information and testing. Do we really think that is motivating?
The problem is that children can be overwhelmed by too much too soon. Yet government policy increasingly advocates pre-kindergarten. Young developing brains do not respond well to too much stimulation, too much curriculum, and too much high-stakes testing. Children become preconditioned to expect high levels of stimulation, which can foster attention disorders. Children become passive and bored. The associated loss of the sense of wonder diminishes a child's motivation to cope with all this stimulus and pressure.
L'Ecuyer cites convincing research showing that children learn at a slower pace than adults. They need more calm and silence. They are more intrigued by mystery. They need to trust in a human attachment figure, most commonly a caring mother. Unfortunately, our educational culture assumes that we don't teach enough curriculum and don't demand enough of children. Children learn to pass tests, not to love learning. Our multi-tasking culture only adds to the sensory and cognitive overload that interferes with learning and mental performance in general. The family breakdown in our culture diminishes a child's trust in primary caregivers and degrades attachment to them. Schools cannot provide such trust and attachment. Nor can pre-kindergarten or day-care.
These are basic reasons why I push for a reform in education that stresses teaching learning skills to young children, as opposed to the domination of traditional curriculum and excessive high-stakes testing. I am writing such a book now. When children have good learning skills, learning stops being an onerous chore. The "Learning Skills Cycle" that I advocate begins with motivation, and motivation begins with a sense of wonder.
Educational policy makers seem confused about why so many students fall behind. Every year, over 1.2 million students drop out of high school in the United States. That's a student every 26 seconds, or about 7,000 every school day. About 25% of high school freshmen fail to graduate from high school on time. At the college level, only 59% of full-time four-year college students graduate within six years. Most college data use a six-year limit because so many students cannot finish in the usual four years.
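A quick arithmetic check of these widely cited figures shows that the per-day and every-26-seconds rates can both be right only if "day" means school day. A minimal sketch, assuming the 1.2-million annual total and roughly 180 school days per year (the school-day count varies by state):

```python
# Sanity-checking the dropout statistics cited above.
dropouts_per_year = 1_200_000

# Calendar-time rate: one dropout every how many seconds?
seconds_per_year = 365 * 24 * 3600
print(round(seconds_per_year / dropouts_per_year))  # 26 seconds

# Per-day rate: matches "about 7,000 a day" only if we count school days.
school_days_per_year = 180  # assumed; varies by state
print(round(dropouts_per_year / school_days_per_year))  # 6667, i.e. about 7,000

# Per calendar day, the figure would be much lower.
print(round(dropouts_per_year / 365))  # 3288
```

So the "every 26 seconds" figure uses calendar time, while the "7,000" figure uses school days; the two are consistent under that reading.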
Over the last 40 years, none of the educational fads we have tried appears to have worked. In this time we have had such high-profile government initiatives as Goals 2000, New Math, A Nation at Risk, No Child Left Behind, Race to the Top, Common Core, the Next Generation Science Standards, charter schools, and Head Start. Where is the evidence that any of this works? SAT scores have not improved, even declining in some years, while funding for education has increased dramatically, on the order of 200%, depending on the state.
Despite much ballyhoo and funding, Head Start's effects wash out within a few years. Nonetheless, many states think Head Start does not start early enough and that what is needed is government-funded pre-kindergarten. Nobody considers what this too-soon, too-much, too-stressful education does to a child's sense of wonder and motivation to learn.
The key question asked by L'Ecuyer is this: Are today's educational paradigms and policies promoting the sense of wonder and motivation to learn, or squelching it? While all our government programs to improve education seem well-intentioned, the results say otherwise. Teaching is now driven by high-stakes testing. While accountability is necessary, when high-stakes testing becomes the focus of education, it poisons the learning atmosphere. The law of unintended consequences applies. Today's educational environment suffocates the wonder and love of learning for its own sake.
# # #
Dr. Klemm is author of two books on learning: Memory Power 101 and Better Grades, Less Effort. Reviews and information can be found at his web site, WRKlemm.com
L'Ecuyer, Catherine. (2014). The wonder approach to learning. Frontiers in Human Neuroscience, 8 (October 6). doi:10.3389/fnhum.2014.00764
Klemm, W. R. (2014). Shift Away from Teaching to the Test: A Better Way to Improve Test Scores. The STATellite, 59 (1): 10-13. http://c.ymcdn.com/sites/www.statweb.org/resource/collection/ED7F18DF-2934-4034-BDF2-CF81CADE155A/WinterSTATellite2014.pdf
Wednesday, March 11, 2015
My last column on "Blaming the Victim" was a departure from my usual emphasis on improving learning and memory. But it did set the stage for this current post on the crippling effect of allowing children to make excuses for underperformance in school.
Most of us know how common it is for kids to make excuses ("the dog ate my homework" syndrome). When we adults were young, we also probably made excuses, blaming the textbook, the teacher, the school, and whatever else could serve to avoid facing the real causes of the problems.
Why do kids do that? The main reason is their fragile egos. Confronting personal weakness is especially hard for kids when they are embedded in an adult culture that inevitably reminds them that they are relatively powerless kids.
I remember a recent dinner-table conversation with my competitive 6th grade granddaughter, who was complaining about a test in which some of the questions were not aligned well with the instruction, which itself was deemed confusing. I said, "I understand that others did do better than you on the test. Wasn't everybody facing the same handicap?" No answer. Then I added, "It doesn't matter who the teacher is or what instruction you get. If you are not first in the class, it is your fault." Again, no response.
One approach that parents and teachers use is to bolster children's egos by praising them richly and often. Too much of a good thing is a bad thing. Too much praise makes kids narcissistic. Anybody who is not aware of the raging narcissism in today's youngsters must not be around young people very much. The most obvious sign is the compulsive checking of e-mail and texting, all in an effort by a child to be at the center of attention.
I and other professors notice narcissism in college students. In a selective college, most students think they are "A" students, and because of low standards in secondary school and grade inflation they are actually told they are A students. If they don't make As in college, it is somebody else's fault (usually the professor).
Scholars are beginning to address this growing narcissism. Eddie Brummelman at the University of Amsterdam in the Netherlands and his colleagues studied 565 children between the ages of 7 and 12. They picked this age group because most other such studies have been done in adults, and they believed that early adolescence is when children develop narcissistic traits such as selfishness, self-centeredness, and vanity.
Over 18 months, the children and their parents were given several detailed questionnaires designed to measure narcissistic traits and parental behavior. There was a small but significant link at each stage between how much parents praised their children and how narcissistic the children were six months later. The small size of the effect suggests that other factors also make people selfish and self-centered. I suspect the effect is larger in the U.S.
Maybe school culture is part of the problem. As in Lake Wobegon, "all kids are above average." For brighter students, the instructional rigor is so low that these kids get a false sense of how smart they are and how easy it is to be an "A" student.
I suspect that another factor is that students are not taught enough about how to be realistically self-aware. They may not even know when they are making excuses unless adults call them on it. Too often, parents side with the student in criticizing a teacher when the real problem is with the child.
Some of the blame shifting comes from biology. It is human nature to claim ownership of actions that turn out well, but to disown actions that yield negative consequences. Experiments support this conclusion. The most recent experiments focused on our sense of time in association with voluntary actions. The design was based on prior evidence that the perceived time lag between an action and its outcome is an implicit index of our sense of ownership of that action. Investigators asked people to press a key, which was followed a quarter of a second later by negative sounds of fear or disgust, positive sounds of achievement or amusement, or neutral sounds. The subjects were then asked to estimate when they had made the action and when they heard the sound; timing-estimation errors were easily measured by computer. Subjects sensed a longer time lag between their actions and the consequences when the outcome (the sound) was negative than when it was positive.
Teaching Kids to Deal with Failure
There is a common denominator to most self-limiting styles of living: a fear of failure. Children express this fear by making excuses, which has the unintended effect of blocking the path to success. Excuses may provide immediate relief from anxiety, but they create a self-limiting learning style that assures continued underachievement.
Whatever one’s station in life, one axiom is paramount: for things to get better for you, you have to get better. This point is well illustrated in an inspiring rags-to-riches success book by A. J. Williams. He points out that a main reason that people do not make the changes they need to is that they are afraid of failure. But, paradoxically, learning from failure is how many people turn their lives around and become happier. Children, I have noticed, are highly resistant to personal change, maybe more so than adults. I am dismayed at how often I show children how to memorize more effectively and they just can't bring themselves to study in a different way. It is as if they don't believe me enough to even try new approaches. Or maybe they have convinced themselves they are mediocre and need the shield of excuses to keep others from detecting their weaknesses.
Louis Armstrong, the famous trumpeter, told an instructive story about fear when he was a boy. One day when his mother asked him to go down to the levee to fetch a pail of drinking water, he came back home with an empty pail. Upon noticing the empty pail, his mother said, “I told you to bring back a pail of water for us to drink. How come your pail is empty?” Louis replied, “There’s an alligator there, and I was scared to death.” His mother then said, “You shouldn’t be afraid. That gator is as afraid of you as you are of him.” To which Louis answered, “If that’s the case, then that water ain’t fit to drink.”
If there is an alligator keeping you away from what you need to do, have faith you will prevail over your demons. But as long as a child lets fear get in the way, her pail will stay empty.
Other kinds of fear are also self-limiting. Many children fear commitment to learning. Commitment exacts an emotional price requiring dedication, passion, and self-discipline. Children fear confusion and difficulty. They fear disapproval.
Kids need to put their under-performance in perspective. Failure and under-achievement are not permanent. They are not pervasive reflections of inadequacy. Children can acquire learning skills that lead to success. Unfortunately, schools don't teach much about learning skills, being focused on teaching to high-stakes tests.
Kids need to recognize their weaknesses and strive to fix them. But to bolster their motivation and general attitude about school, they need to recognize what they have done well and strive to do even more of that. Dwelling on under-performance is counter-productive.
The Most Important Thing Kids Need to Learn
Excuse-making prevents a child from developing the attitude that will best serve them throughout life: a sense of personal efficacy, a state of perceived control over one's life. I explain this more thoroughly in my book, "Blame Game, How to Win It." But a summary here will have to suffice.
How children perceive their personal power determines how much effort they will expend to control their lives. If they lack a genuine sense of power, excuse-making applies salve to their wounded egos. Self-efficacy is not the same as self-esteem. Psychologist Albert Bandura puts it this way: "Perceived self-efficacy is concerned with judgments of personal capability, whereas self-esteem is concerned with judgments of self-worth." Both are important for happiness, but it is perceived self-efficacy that drives academic achievement.

One practical setting where this distinction is apparently not recognized is the classroom, where many teachers think the cure for low achievement is to foster self-esteem. Teachers should emphasize self-efficacy instead. Children learn self-efficacy from teachers and parents who enable them to master their environment. Students who are filled with self-doubt do not put much effort into school work. They make excuses. As kids are progressively given the skills to achieve, they develop confidence in their ability to succeed, which motivates them to strive for more achievement.

When I was a kid, I became a good student only when I discovered, more or less by accident, that I could make good grades if I tried. That discovery motivated me to do just that. This sense has to be earned. It does not come from excuses.
Brummelman, Eddie, et al. (2015). Origins of narcissism in children. Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1420870112
Klemm, W. R. (2008). Blame Game. How To Win It. Bryan, TX: Benecton.
Yoshie, M., and Haggard, P. (2013). Negative emotional outcomes attenuate sense of agency over voluntary actions. Current Biology. doi:10.1016/j.cub.2013.08.034