long-term potentiation

In neuroscience, long-term potentiation (LTP) is a persistent strengthening of synapses based on recent patterns of activity. These are patterns of synaptic activity that produce a long-lasting increase in signal transmission between two neurons.[2] The opposite of LTP is long-term depression, which produces a long-lasting decrease in synaptic strength.

It is one of several phenomena underlying synaptic plasticity, the ability of chemical synapses to change their strength. As memories are thought to be encoded by modification of synaptic strength,[3] LTP is widely considered one of the major cellular mechanisms that underlies learning and memory.[2][3]

LTP was discovered in the rabbit hippocampus by Terje Lømo in 1966 and has remained a popular subject of research since. Many modern LTP studies seek to better understand its basic biology, while others aim to draw a causal link between LTP and behavioral learning. Still others try to develop methods, pharmacologic or otherwise, of enhancing LTP to improve learning and memory. LTP is also a subject of clinical research, for example, in the areas of Alzheimer’s disease and addiction medicine.


Anki

Anki is a program which makes remembering things easy. Because it’s a lot more efficient than traditional study methods, you can either greatly decrease your time spent studying, or greatly increase the amount you learn.

Anyone who needs to remember things in their daily life can benefit from Anki. Since it is content-agnostic and supports images, audio, videos and scientific markup (via LaTeX), the possibilities are endless.
For example:

  • Learning a language
  • Studying for medical and law exams
  • Memorizing people’s names and faces
  • Brushing up on geography
  • Mastering long poems
  • Even practicing guitar chords!
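
Anki’s scheduler is derived from the SM-2 spaced-repetition algorithm, and the core of that family of algorithms fits in a few lines. The sketch below follows the published SM-2 ease-factor update and its 1.3 floor, but everything else about Anki’s real scheduler (learning steps, lapse handling, interval fuzzing) is deliberately omitted:

    # A minimal sketch of an SM-2-style review step (Anki's scheduler is a
    # heavily modified descendant of SM-2). Grades run 0-5; >= 3 is a pass.
    # The ease update and the 1.3 floor follow the published SM-2 spec; the
    # rest of Anki's actual behavior is omitted.

    def review(interval_days: float, ease: float, grade: int) -> tuple[float, float]:
        """Return (next_interval_days, new_ease) after one review."""
        if grade < 3:
            return 1.0, ease  # failed recall: see the card again tomorrow
        next_interval = max(1.0, interval_days * ease)  # intervals grow multiplicatively
        new_ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
        return next_interval, new_ease

    interval, ease = 1.0, 2.5  # SM-2's conventional starting ease
    for grade in (4, 4, 5, 3):
        interval, ease = review(interval, ease, grade)
        print(f"next review in {interval:.0f} days (ease {ease:.2f})")

The multiplicative interval growth is what makes the workload sustainable: each successful review pushes the next one exponentially further into the future.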

Memory

Hermann Ebbinghaus (January 24, 1850 – February 26, 1909) was a German psychologist who pioneered the experimental study of memory, and is known for his discovery of the forgetting curve and the spacing effect. He was also the first person to describe the learning curve.[1] He was the father of the eminent neo-Kantian philosopher Julius Ebbinghaus.
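
Ebbinghaus’s forgetting curve is commonly modeled today as exponential decay, R = e^{-t/S}, where the stability S grows with each review (the spacing effect). The functional form and every number in the sketch below are modern illustrative conventions, not Ebbinghaus’s own equations:

    import math

    def retention(days_since_review: float, stability: float) -> float:
        """Fraction retained, modeled as R = exp(-t / S)."""
        return math.exp(-days_since_review / stability)

    # The spacing effect, schematically: reviews at increasing intervals keep
    # raising stability, so the curve flattens. The 1.8 growth factor is
    # purely illustrative.
    stability, last_review = 1.0, 0
    for day in (1, 3, 7, 16):
        print(f"day {day}: retention = {retention(day - last_review, stability):.2f}")
        stability *= 1.8  # each successful review slows subsequent forgetting
        last_review = day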


college students

Why I’m Asking You Not to Use Laptops

August 25, 2014, by Anne Curzan

First, if you have your laptop open, it is almost impossible not to check email or briefly surf the Internet, even if you don’t mean to or have told yourself that you won’t. I have the same impulse if I have my laptop open in a meeting. The problem is that studies indicate that this kind of multitasking impairs learning; once we are on email/the web, we are no longer paying very good attention to what is happening in class.

A study that came out in June—and which got a lot of buzz in the mainstream press—suggests that taking notes by hand rather than typing them on a laptop improves comprehension of the material. While students taking notes on a laptop (and only taking notes—they were not allowed to multitask) wrote down more of the material covered in class, they were often typing what the instructor said verbatim, which seems to have led to less processing of the material. The students taking notes by hand had to do more synthesizing and condensing as they wrote because they could not get everything down. As a result, they learned the material better.* I think there is also something to the ease with which one can create visual connections on a handwritten page through arrows, flow charts, etc.


Unconscious Decision Making

Published on Jul 26, 2012 Instinct is the driving force behind human decision making. Irrationality must be recognized if we’re going to get beyond the risks of not being built as thinking machines, says David Ropeik. David P. Ropeik is an international consultant, author, teacher, and speaker on risk perception and risk communication.[1] He is also creator and director of Improving Media Coverage of Risk, a training program for journalists. He is a regular contributor to Big Think,[2] Psychology Today,[3] Cognoscenti,[4] and the Huffington Post.[5] http://bigthink.com


Published on Nov 26, 2012 Animation describing the Universal Principles of Persuasion based on the research of Dr. Robert Cialdini, Professor Emeritus of Psychology and Marketing, Arizona State University. Dr. Robert Cialdini & Steve Martin are co-authors (together with Dr. Noah Goldstein) of the New York Times, Wall Street Journal and Business Week International Bestseller Yes! 50 Scientifically Proven Ways to be Persuasive. US Amazon http://tinyurl.com/afbam9g UK Amazon http://tinyurl.com/adxrp6c IAW USA: http://www.influenceatwork.com IAW UK: http://www.influenceatwork.co.uk/


Nobel Prize winning neuropsychiatrist Eric Kandel describes new research which hints at the possibility of a biological basis to the unconscious mind. Directed / Produced by Elizabeth Rodd and Jonathan Fowler

Eric Richard Kandel (born November 7, 1929) is an American neuropsychiatrist. He was a recipient of the 2000 Nobel Prize in Physiology or Medicine for his research on the physiological basis of memory storage in neurons. He shared the prize with Arvid Carlsson and Paul Greengard.

Kandel, who had studied psychoanalysis, wanted to understand how memory works. His mentor, Harry Grundfest, said, “If you want to understand the brain you’re going to have to take a reductionist approach, one cell at a time.” So Kandel studied the neural system of the sea slug Aplysia californica, which has large nerve cells amenable to experimental manipulation and is a member of the simplest group of animals known to be capable of learning.[1]

Starting in 1966, James Schwartz collaborated with Kandel on a biochemical analysis of changes in neurons associated with learning and memory storage. By this time it was known that long-term memory, unlike short-term memory, involved the synthesis of new proteins. By 1972 they had evidence that the second-messenger molecule cyclic AMP (cAMP) was produced in Aplysia ganglia under conditions that cause short-term memory formation (sensitization). In 1974 Kandel moved his lab to Columbia University and became founding director of the Center for Neurobiology and Behavior. It was soon found that the neurotransmitter serotonin, acting to produce the second messenger cAMP, is involved in the molecular basis of sensitization of the gill-withdrawal reflex. By 1980, collaboration with Paul Greengard had demonstrated that cAMP-dependent protein kinase, also known as protein kinase A (PKA), acted in this biochemical pathway in response to elevated levels of cAMP. Steven Siegelbaum identified a potassium channel that could be regulated by PKA, coupling serotonin’s effects to altered synaptic electrophysiology.

In 1983 Kandel helped form the Howard Hughes Medical Institute laboratory at Columbia devoted to molecular neural science. The Kandel lab then sought to identify proteins that had to be synthesized to convert short-term memories into long-lasting memories. One of the nuclear targets for PKA is the transcriptional control protein CREB (cAMP response element binding protein). In collaboration with David Glanzman and Craig Bailey, Kandel identified CREB as a protein involved in long-term memory storage. One result of CREB activation is an increase in the number of synaptic connections. Thus, short-term memory had been linked to functional changes in existing synapses, while long-term memory was associated with a change in the number of synaptic connections.

Some of the synaptic changes observed by Kandel’s laboratory provide examples of Hebbian learning; one article describes the role of Hebbian learning in the Aplysia siphon-withdrawal reflex.[4] The Kandel lab has also performed important experiments using transgenic mice as a system for investigating the molecular basis of memory storage in the vertebrate hippocampus.[5][6][7] Kandel’s original idea that learning mechanisms would be conserved across all animals has been confirmed: neurotransmitters, second-messenger systems, protein kinases, ion channels, and transcription factors like CREB have been shown to function in both vertebrate and invertebrate learning and memory storage.[8][9]

Kandel is a professor of biochemistry and biophysics at the College of Physicians and Surgeons at Columbia University. He is a Senior Investigator in the Howard Hughes Medical Institute. He was also the founding director of the Center for Neurobiology and Behavior, which is now the Department of Neuroscience at Columbia University. Kandel’s popularized account chronicling his life and research, In Search of Memory: The Emergence of a New Science of Mind,[2] was awarded the 2006 Los Angeles Times Book Award for Science and Technology.


Learn every gesture and body language cue in one video. Eye, hand, leg, arm, and mouth gestures are completely covered. Part of the Gestures and Body Language series: be an expert in body language. Applies to both his and her body language. The article is here: http://bit.ly/apSipQ


Hebbian learning

Hebbian theory is a theory in neuroscience which proposes an explanation for the adaptation of neurons in the brain during the learning process. It describes a basic mechanism for synaptic plasticity, where an increase in synaptic efficacy arises from the presynaptic cell’s repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in his 1949 book The Organization of Behavior,[1] the theory is also called Hebb’s rule, Hebb’s postulate, and cell assembly theory. Hebb states it as follows:

“Let us assume that the persistence or repetition of a reverberatory activity (or “trace”) tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”[1]

The theory is often summarized as “Cells that fire together, wire together”.[2] However, this summary should not be taken literally. Hebb emphasized that cell A needs to ‘take part in firing’ cell B, and such causality can only occur if cell A fires just before, not at the same time as, cell B. This important aspect of causation in Hebb’s work foreshadowed what we now know about spike-timing-dependent plasticity, which requires temporal precedence.[3] The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells, and provides a biological basis for errorless learning methods for education and memory rehabilitation.
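
The requirement of temporal precedence noted above has a standard quantitative idealization in the spike-timing-dependent plasticity literature: the sign and size of the weight change depend on the delay between pre- and postsynaptic spikes. The double-exponential window and the parameter values in this Python sketch are common textbook conventions, not anything found in Hebb’s own text:

    import math

    # Illustrative textbook-style STDP parameters (not from any one dataset).
    A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants, milliseconds

    def stdp_dw(dt_ms: float) -> float:
        """Weight change for dt = t_post - t_pre (ms)."""
        if dt_ms > 0:   # pre fires before post: A "takes part in firing" B -> potentiation
            return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
        else:           # post fires first: causality violated -> depression
            return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

    for dt in (-40, -10, 10, 40):
        print(f"dt = {dt:+} ms -> dw = {stdp_dw(dt):+.4f}")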

Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb’s theories on the form and function of cell assemblies can be understood from the following:

“The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become ‘associated’, so that activity in one facilitates activity in the other.” (Hebb 1949, p. 70)
“When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.” (Hebb 1949, p. 63)

Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:

“If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become ‘auto-associated’. We may call a learned (auto-associated) pattern an engram.” (Allport 1985, p. 44)

Hebbian theory has been the primary basis for the conventional view that when analyzed from a holistic level, engrams are neuronal nets or neural networks.

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica.

Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments that indicate that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.

Principles

From the point of view of artificial neurons and artificial neural networks, Hebb’s principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously—and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.

The following is one formulaic description of Hebbian learning (note that many other descriptions are possible):

w_{ij} = x_i x_j

where w_{ij} is the weight of the connection from neuron j to neuron i, and x_i is the input for neuron i. Note that this is pattern learning (weights are updated after every training example). In a Hopfield network, connections w_{ij} are set to zero if i = j (no reflexive connections allowed). With binary neurons (activations of either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
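
As a concrete illustration, here is a minimal Python sketch of the pattern-learning step, assuming bipolar (+1/−1) activations as commonly used in Hopfield networks rather than the 0/1 convention just mentioned; a single update is an outer product with the diagonal zeroed:

    import numpy as np

    def hebb_update(W: np.ndarray, x: np.ndarray) -> np.ndarray:
        """One pattern-learning step: w_ij <- w_ij + x_i * x_j."""
        W = W + np.outer(x, x)
        np.fill_diagonal(W, 0.0)  # Hopfield convention: no reflexive connections
        return W

    W = np.zeros((4, 4))
    W = hebb_update(W, np.array([1, -1, 1, -1]))
    print(W)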

Another formulaic description is:

w_{ij} = \frac{1}{p} \sum_{k=1}^{p} x_i^k x_j^k

where w_{ij} is the weight of the connection from neuron j to neuron i, p is the number of training patterns, and x_i^k is the kth input for neuron i. This is learning by epoch (weights are updated after all the training examples are presented). Again, in a Hopfield network, connections w_{ij} are set to zero if i = j (no reflexive connections).
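
The epoch form is the average of those outer products over all p patterns, which collapses into a single matrix product. Again a sketch under the same bipolar assumption:

    import numpy as np

    def hebb_epoch(patterns: np.ndarray) -> np.ndarray:
        """patterns: (p, n) array; returns w_ij = (1/p) * sum_k x_i^k x_j^k."""
        W = patterns.T @ patterns / patterns.shape[0]
        np.fill_diagonal(W, 0.0)  # no reflexive connections
        return W

    patterns = np.array([[1, -1,  1, -1],
                         [1,  1, -1, -1]])
    print(hebb_epoch(patterns))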

A variation of Hebbian learning that takes into account blocking and many other neural learning phenomena is the mathematical model of Harry Klopf. Klopf’s model reproduces a great many biological phenomena and is also simple to implement.

Generalization and stability

Hebb’s Rule is often generalized as

\Delta w_i = \eta x_i y

or the change in the ith synaptic weight w_i is equal to a learning rate \eta times the ith input x_i times the postsynaptic response y. Often cited is the case of a linear neuron,

y = \sum_j w_j x_j

and the previous section’s simplification takes both the learning rate and the input weights to be 1. This version of the rule is clearly unstable: in any network with a dominant signal, the synaptic weights will increase or decrease exponentially. Indeed, it can be shown that for any neuron model, Hebb’s rule is unstable.[citation needed] Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja’s rule,[4] or the Generalized Hebbian Algorithm.
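
A short simulation makes the instability claim concrete and shows how Oja’s rule, one of the stabilized alternatives named above, tames it. The input distribution, learning rate, and step count below are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    # 1000 input samples with one dominant direction (std 3 vs 1 and 0.5).
    X = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 0.5])

    w_hebb = np.full(3, 0.1)
    w_oja = np.full(3, 0.1)
    eta = 0.01
    for x in X:
        w_hebb += eta * (w_hebb @ x) * x      # plain Hebb: |w| grows without bound
        y = w_oja @ x
        w_oja += eta * y * (x - y * w_oja)    # Oja: the -y^2 w term keeps |w| near 1

    print("plain Hebb |w|:", np.linalg.norm(w_hebb))  # astronomically large
    print("Oja's rule |w|:", np.linalg.norm(w_oja))   # ~1, aligned with the dominant input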

Exceptions

Despite the common use of Hebbian models for LTP, there exist several exceptions to Hebb’s principles and examples that demonstrate that some aspects of the theory are oversimplified. One of the best-documented of these exceptions pertains to how synaptic modification may not occur only between activated neurons A and B, but in neighboring neurons as well.[5] This is because Hebbian modification depends on retrograde signaling in order to modify the presynaptic neuron.[6] The compound most commonly identified as fulfilling this retrograde transmitter role is nitric oxide, which, due to its high solubility and diffusibility, often exerts effects on nearby neurons.[7] This type of diffuse synaptic modification, known as volume learning, counters, or at least supplements, the traditional Hebbian model.[8]

Hebbian learning account of mirror neurons

Hebbian learning and what we know about spike-timing-dependent plasticity have also been used in an influential theory of how mirror neurons emerge.[9][10] Mirror neurons are neurons that fire both when an individual performs an action and when the individual sees[11] or hears[12] another perform a similar action. The discovery of these neurons has been very influential in explaining how individuals make sense of the actions of others: when we perceive the actions of others, we activate the motor programs we would use to perform similar actions. The activation of these motor programs then adds information to the perception and helps predict what the person will do next based on the perceiver’s own motor program. A challenge has been to explain how individuals come to have neurons that respond both while performing an action and while hearing or seeing another perform a similar action.

Christian Keysers and David Perrett suggested that while an individual performs a particular action, the individual will see, hear and feel himself perform it. These re-afferent sensory signals will trigger activity in neurons responding to the sight, sound and feel of the action. Because the activity of these sensory neurons will consistently overlap in time with that of the motor neurons that caused the action, Hebbian learning predicts that the synapses connecting neurons responding to the sight, sound and feel of an action with the neurons triggering that action should be potentiated. The same is true while people look at themselves in the mirror, hear themselves babble, or are imitated by others. After repeated experience of this re-afference, the synapses connecting the sensory and motor representations of an action would become so strong that the motor neurons would start firing to the sound or the sight of the action, and a mirror neuron would have been created.

Evidence for this perspective comes from many experiments showing that motor programs can be triggered by novel auditory or visual stimuli after repeated pairing of the stimulus with the execution of the motor program (see [13] for a review of the evidence). For instance, people who have never played the piano do not activate brain regions involved in playing the piano when listening to piano music, yet five hours of piano lessons, in which the participant is exposed to the sound of the piano each time a key is pressed, suffice to trigger activity in motor regions of the brain upon later listening to piano music.[14] Consistent with the fact that spike-timing-dependent plasticity occurs only if the presynaptic neuron’s firing predicts the postsynaptic neuron’s firing,[15] the link between sensory stimuli and motor programs also seems to be potentiated only if the stimulus is contingent on the motor program.


the cone of experience

People remember 10%, 20%…Oh Really?

Publication Note

This article was originally published on the Work-Learning Research website (www.work-learning.com) in 2002. It may have had some minor changes since then. It was moved to this blog in 2006.

Introduction

People do NOT remember 10% of what they read, 20% of what they see, 30% of what they hear, etc. That information, and similar pronouncements, is fraudulent. Moreover, general statements on the effectiveness of learning methods are not credible—learning results depend on too many variables to enable such precision. Unfortunately, this bogus information has been floating around our field for decades, crafted by many different authors and presented in many different configurations, including bastardizations of Dale’s Cone. The rest of this article offers more detail.

My Search For Knowledge

My investigation of this issue began when I came across the following graph:

[Image: the graph in question, a chart of retention percentages citing Chi, Bassok, Lewis, Reimann, & Glaser (1989)]

The Graph is a Fraud!

After reading the cited article several times and not seeing the graph—nor the numbers on the graph—I got suspicious and got in touch with the first author of the cited study, Dr. Michelene Chi of the University of Pittsburgh (who is, by the way, one of the world’s leading authorities on expertise). She said this about the graph:

“I don’t recognize this graph at all. So the citation is definitely wrong; since it’s not my graph.”

What makes this particularly disturbing is that this graph has popped up all over our industry, and many instructional-design decisions have been based on the information contained in the graph.

Bogus Information is Widespread

I often begin my workshops on instructional design and e-learning, and my conference presentations, with this graph as a warning and wake-up call. Typically, over 90% of the audience raise their hands when I ask whether anyone has seen the numbers depicted in the graph. Later I often hear audible gasps and nervous giggles as the information is debunked. Clearly, lots of experienced professionals in our field know this graph and have used it to guide their decision making.

The graph is representative of a larger problem. The numbers presented on the graph have been circulating in our industry since the late 1960s, and they have no research backing whatsoever. Dr. JC Kinnamon (2002) of Midi, Inc., searched the web and found dozens of references to those dubious numbers in college courses, research reports, and in vendor and consultant promotional materials.

Where the Numbers Came From

The bogus percentages were first published by an employee of Mobil Oil Company in 1967, writing in the magazine Film and Audio-Visual Communications. D. G. Treichler didn’t cite any research, but our field has unfortunately accepted his/her percentages ever since. NTL Institute still claims that they did the research that derived the numbers. See my response to NTL.

Michael Molenda, a professor at Indiana University, is currently working to track down the origination of the bogus numbers. His efforts have uncovered some evidence that the numbers may have been developed as early as the 1940s by Paul John Phillips, who worked at the University of Texas at Austin and developed training classes for the petroleum industry. During World War Two, Phillips taught Visual Aids at the U.S. Army’s Ordnance School at the Aberdeen (Maryland) Proving Grounds, where the numbers have also appeared and where they may have been developed.

Strange coincidence: I was born on these very same Aberdeen Proving Grounds.

Ernie Rothkopf, professor emeritus of Columbia University, one of the world’s leading applied research psychologists on learning, reported to me that the bogus percentages have been widely discredited, yet they keep rearing their ugly head in one form or another every few years.

Many people now associate the bogus percentages with Dale’s “Cone of Experience,” developed in 1946 by Edgar Dale. It provided an intuitive model of the concreteness of various audio-visual media. Dale included no numbers in his model and there was no research used to generate it. In fact, Dale warned his readers not to take the model too literally. Dale’s Cone, copied without changes from the 3rd and final edition of his book, is presented below:

Dale’s Cone of Experience (Dale, 1969, p. 107)

You can see that Dale used no numbers with his cone. Somewhere along the way, someone unnaturally fused Dale’s Cone and Treichler’s dubious percentages. One common example is represented below.

[Image: Dale’s Cone fused with the bogus percentages, citing Wiman and Meierhenry (1969)]

The source cited in the diagram above by Wiman and Meierhenry (1969) is a book of edited chapters. Though two of the chapters (Harrison, 1969; Stewart, 1969) mention Dale’s Cone of Experience, neither of them includes the percentages. In other words, the diagram above is citing a book that does not include the diagram and does not include the percentages indicated in the diagram.

Here are some more examples:

[Images: three further examples, including a slide from a Josh Bersin webinar (May 26, 2005), a “Cone of Learning” diagram, and a retention chart]

The “Evidence” Changes to Meet the Need of the Deceiver

The percentages, and the graph in particular, have been passed around in our field from reputable person to reputable person. The people who originally created the fabrications are to blame for getting this started, but there are clearly many people willing to bend the information to their own devices. Kinnamon’s (2002) investigation found that Treichler’s percentages have been modified in many ways, depending on the message the shyster wants to send. Some people have changed the relative percentages. Some have improved Treichler’s grammar. Some have added categories to make their point. For example, one version of these numbers says that people remember 95% of the information they teach to others.

People have not only cited Treichler, Chi, Wiman and Meierhenry for the percentages, but have also incorrectly cited William Glasser, and correctly cited a number of other people who have utilized Treichler’s numbers.

It seems clear from some of the fraudulent citations that deception was intended. On the graph that prompted our investigation, the title of the article had been modified from the original to get rid of the word “students.” The creator of the graph must have known that the term “students” would make people in the training / development / performance field suspicious that the research was done on children. The creator of the Wiman and Meierhenry diagram did four things that make it difficult to track down the original source: (1) the book they cited is fairly obscure, (2) one of the authors’ names is spelled wrong, (3) the year of publication is incorrect, and (4) the name Charles Merrill, which was actually a publishing house, was ambiguously presented so that it might have referred to an author or editor.

But Don’t The Numbers Speak The Truth?

The numbers are not credible, and even if they made sense, they’d still be dangerous.

If we look at the numbers a little more closely, they are highly unconvincing. How did someone compare “reading” and “seeing?” Don’t you have to “see” to “read?” What does “collaboration” mean anyway? Were two people talking about the information they were learning? If so, weren’t they “hearing” what the other person had to say? What does “doing” mean? How much were they “doing” it? Were they “doing” it correctly, or did they get feedback? If they were getting feedback, how do we know the learning didn’t come from the feedback—not the “doing?” Do we really believe that people learn more “hearing” a lecture, than “reading” the same material? Don’t people who “read” have an advantage in being able to pace themselves and revisit material they don’t understand? And how did the research produce numbers that are all factors of ten? Doesn’t this suggest some sort of review of the literature? If so, shouldn’t we know how the research review was conducted? Shouldn’t we get a clear and traceable citation for such a review?

Even the idea that you can compare these types of learning methods is ridiculous. As any good research psychologist knows, the measurement situation affects the learning outcome. If we have a person learn foreign-language vocabulary by listening to an audiotape and vocalizing their responses, it doesn’t make sense to test them by having them write down their answers. We’d have a poor measure of their ability to verbalize vocabulary. The opposite is also nonsensical. People who learn vocabulary by seeing it on the written page cannot be fairly evaluated by asking them to say the words aloud. It’s not fair to compare these different methods by using the same test, because the choice of test will bias the outcome toward the learning situation that is most like the test situation.

But why not compare one type of test to another—for example, if we want to compare vocabulary learning through hearing and seeing, why don’t we use an oral test and written one? This doesn’t help either. It’s really impossible to compare two things on different indices. Can you imagine comparing the best boxer with the best golfer by having the boxer punch a heavy bag and having the golfer hit for distance? Would Muhammad Ali punching with 600 pounds of pressure beat Tiger Woods hitting his drives 320 yards off the tee?

The Importance of Listing Citations

Even if the numbers presented on the graph had been published in a refereed journal—research we were reasonably sure we could trust—it would still be dangerous not to know where they came from. Research conclusions have a way of morphing over time. Wasn’t it true ten years ago that all fat was bad? Newer research has revealed that monounsaturated oils like olive oil might actually be good for us. If a person doesn’t cite their sources, we might not realize that their conclusions are outdated or simply based on poor research. Conversely, we may also lose access to good sources of information. Suppose Treichler had really discovered a valid source of information? Because he/she did not use citations, that research would remain forever hidden in obscurity.

The context of research makes a great deal of difference. If we don’t know a source, we don’t really know whether the research is relevant to our situation. For example, an article by Kulik and Kulik (1988) concluded that immediate feedback was better than delayed feedback. Most people in the field now accept their conclusions. Efforts by Work-Learning Research to examine Kulik and Kulik’s sources indicated that most of the articles they reviewed tested the learners within a few minutes after the learning event, a very unrealistic analog for most training situations. Their sources enabled us to examine their evidence and find it faulty.

Who Should We Blame?

The original shysters are not the only ones to blame. The fact that many people who have disseminated the graph used the same incorrect citation makes it clear that they never accessed the original study. Everyone who uses a citation to make a point (or draw a conclusion) ought to check the citation. That, of course, includes all of us who are consumers of this information.

What Does This Tell Us About Our Field?

It tells us that we may not be able to trust the information that floats around our industry. It tells us that even our most reputable people and organizations may require the Wizard-of-Oz treatment—we may need to look behind the curtain to verify their claims.

The Danger To Our Field

At Work-Learning Research, our goal is to provide research-based information that practitioners can trust. We began our research efforts several years ago when we noticed that the field jumps from one fad to another while at the same time holding religiously to ideas that would be better cast aside.

The fact that our field is so easily swayed by the mildest whiffs of evidence suggests that we don’t have sufficient mechanisms in place to improve what we do. Because we’re not able or willing to provide due diligence on evidence-based claims, we’re unable to create feedback loops to push the field more forcefully toward continuing improvement.

Isn’t it ironic? We’re supposed to be the learning experts, but because we too easily take things for granted, we find ourselves skipping down all manner of yellow-brick roads.

How to Improve the Situation

It will seem obvious, but each and every one of us must take responsibility for the information we transmit to ensure its integrity. More importantly, we must be actively skeptical of the information we receive. We ought to check the facts, investigate the evidence, and evaluate the research. Finally, we must continue our personal search for knowledge—for it is only with knowledge that we can validly evaluate the claims that we encounter.

Our Citations

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Dale, E. (1946, 1954, 1969). Audio-visual methods in teaching. New York: Dryden.

Harrison, R. (1969). Communication theory. In R. V. Wiman and W. C. Meierhenry (Eds.) Educational media: Theory into practice. Columbus, OH: Merrill.

Kinnamon, J. C. (2002). Personal communication, October 25.

Kulik, J. A., & Kulik, C-L. C. (1988). Timing of feedback and verbal learning. Review of Educational Research, 58, 79-97.

Molenda, M. H. (2003). Personal communications, February and March.

Rothkopf, E. Z. (2002). Personal communication, September 26.

Stewart, D. K. (1969). A learning-systems concept as applied to courses in education and training. In R. V. Wiman and W. C. Meierhenry (Eds.) Educational media: Theory into practice. Columbus, OH: Merrill.

Treichler, D. G. (1967). Are you missing the boat in training aids? Film and Audio-Visual Communication, 1, 14-16, 28-30, 48.

Wiman, R. V. & Meierhenry, W. C. (Eds.). (1969). Educational media: Theory into practice. Columbus, OH: Merrill.


physical activity shown to improve cognitive functions

Physical Exercise during Encoding Improves Vocabulary Learning in Young Female Adults: A Neuroendocrinological Study

Abstract

Acute physical activity has been repeatedly shown to improve various cognitive functions. However, there have been no investigations comparing the effects of exercise during verbal encoding versus exercise prior to encoding on long-term memory performance. In this current psychoneuroendocrinological study we aim to test whether light to moderate ergometric bicycling during vocabulary encoding enhances subsequent recall compared to encoding during physical rest and encoding after being physically active. Furthermore, we examined the kinetics of brain-derived neurotrophic factor (BDNF) in serum which has been previously shown to correlate with learning performance. We also controlled for the BDNF val66met polymorphism. We found better vocabulary test performance for subjects that were physically active during the encoding phase compared to sedentary subjects. Post-hoc tests revealed that this effect was particularly present in initially low performers. BDNF in serum and BDNF genotype failed to account for the current result. Our data indicates that light to moderate simultaneous physical activity during encoding, but not prior to encoding, is beneficial for subsequent recall of new items.