The hypothetico-deductive model


The hypothetico-deductive model or method is a proposed description of scientific method. According to it, scientific inquiry proceeds by formulating a hypothesis in a form that could conceivably be falsified by a test on observable data. A test that could and does run contrary to predictions of the hypothesis is taken as a falsification of the hypothesis. A test that could but does not run contrary to the hypothesis corroborates the theory. It is then proposed to compare the explanatory value of competing hypotheses by testing how stringently they are corroborated by their predictions.

One example of an algorithmic statement of the hypothetico-deductive method is as follows:[1]

1. Use your experience: Consider the problem and try to make sense of it. Gather data and look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture (hypothesis): When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce predictions from the hypothesis: if you assume 2 is true, what consequences follow?
4. Test (or Experiment): Look for evidence (observations) that conflicts with these predictions in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This formal fallacy is called affirming the consequent.[2]

One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2, try to invent a new 2, deduce a new 3, look for 4, and so forth.
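
As a rough sketch only (not part of the cited formulation), the loop described above can be written out in code. The functions gather_data, form_hypothesis, deduce_predictions and run_test are hypothetical placeholders standing in for steps 1 through 4, and gather_data is assumed to return a list of observations.

```python
def hypothetico_deductive(gather_data, form_hypothesis, deduce_predictions,
                          run_test, max_rounds=100):
    """Sketch of the 1-2-3-4 loop: hypotheses are only corroborated or
    falsified, never proven."""
    observations = gather_data()                      # step 1: use your experience
    hypothesis = form_hypothesis(observations)        # step 2: form a conjecture
    for _ in range(max_rounds):
        predictions = deduce_predictions(hypothesis)  # step 3: deduce consequences
        counterexample = run_test(predictions)        # step 4: look for conflicting evidence
        if counterexample is not None:
            # The test ran contrary to the predictions: the hypothesis is
            # falsified, so go back to step 2 with the enlarged evidence.
            observations.append(counterexample)
            hypothesis = form_hypothesis(observations)
        # Otherwise the hypothesis is merely corroborated and testing continues.
    return hypothesis
```

Note the asymmetry of the two branches: a conflicting result replaces the hypothesis, while a passing result only lets it survive another round, which is exactly the point about corroboration versus proof made below.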

Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2.[3] (This is what Einstein meant when he said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”[4])

Despite the philosophical questions raised, the hypothetico-deductive model remains perhaps the best understood theory of scientific method.

The morality of evolution


The Descent of Man, and Selection in Relation to Sex is a book on evolutionary theory by English naturalist Charles Darwin, first published in 1871. It was Darwin’s second book on evolutionary theory, following his 1859 work, On the Origin of Species, in which he explored the concept of natural selection. In The Descent of Man, Darwin applies evolutionary theory to human evolution, and details his theory of sexual selection, a form of biological adaptation distinct from, yet interconnected with, natural selection. The book discusses many related issues, including evolutionary psychology, evolutionary ethics, differences between human races, differences between sexes, the dominant role of women in choosing mating partners, and the relevance of the evolutionary theory to society.


As a watchman on the tower, I feel to warn you that one of the chief means of misleading our youth and destroying the family unit is our educational institutions. There is more than one reason why the Church is advising our youth to attend colleges close to their homes where institutes of religion are available. It gives the parents the opportunity to stay close to their children, and if they become alerted and informed, these parents can help expose some of the deceptions of men like … Charles Darwin.

Ezra Taft Benson

More than other modern societies, the United States relies, even depends, on myth to cement its confidence. Americans are profoundly ahistorical.

Our national myths are representations of identity and the actual instrument of acculturation. This process of acculturation through myth, moreover, is achieved through entertainment: television and movies. The culture of a society—its ethos—defines distinctive patterns of individual and group behavior. Culture shapes the way we look at the world. Whatever our immediate group membership, our final sense of identity is shaped by larger cultural patterns. If we define ourselves according to myth, what kind of worldview has it given us?

First, at the core,  the United States has an essentially religious value system. The primal myth of our origin is that of the “Pilgrim’s Progress,” with the Plymouth Colony completely overshadowing Virginia and its lineal transplanting of British class and caste. We believe that the source and inspiration of America is bound up in religion: religious freedom, but also the moral vantage of Calvin. The impact of Protestant thought is felt in the ways we talk about mission, service, sacrifice, restraint. It underlies the sense that Americans share of serving a higher calling. This underpinning remains dominant today even though it is highly secularized, and transmuted into legal, constitutional language.

Second, Americans still hew to a set of specific myths about the United States. One of these is that America is the source of human progress and can achieve perfection as a society. Americans believe that there has never been a society quite like our own. This American “exceptionalism” suggests that we are a people graced with unusual natural endowments. We think of ourselves literally as a “people of plenty.” But our mythology also reminds us that this land was a great “untamed wilderness,” a “land of savagery.” It was the exceptional will, unity and vision of the American people and their beliefs that transformed the landscape. The twin icons of national bounty and national achievement have inspired two senses of an American national purpose: a conviction that the United States should serve as an example to the world, that America and its people are the model for all human development; and an impulse to change the world for good, to become the active agency of human progress. Tyranny and resistance to change are so entrenched in the world that only direct American intercession can shift the direction of history. America’s gifts demand that it assume a missionary role.

In the United States at the turn of the 20th century, Darwinism was greeted with glee because it seemed so compatible with the prevailing ideology of the day, in which robber-baron capitalists like the Carnegies, Mellons, Sumners, Stanfords and, yes, even Jack London could not stop rattling on about how the “survival of the fittest” justified crushing unions, exploiting immigrant labor or being left unregulated to amass huge fortunes while administering monopolies. In the popular ethos of the United States, there is a confusion of Capitalism with the American worship of the individual and the nuclear family. It can be argued that these ideas are related, but they are different and independent. According to the American work ethic, you only get what you work for, but this is not what Capitalism is. Capitalism is the idea that market forces, carried out by intelligent agents looking for profit (self-interest) and left to themselves, will generate wealth and prosperity for society as a whole. The dichotomy Capitalism/Socialism is actually dated: if one understands socialism as government control of the economy, all, 100%, of the world’s governments are socialist to some degree. In any case, we now live in a competitive society and are often told that to get ahead we require drive, commitment and determination, that we must expend a great amount of energy and, if necessary, use force to get what we want. A ‘survival of the fittest’ mentality is deeply entrenched in our culture. Despite the fact that this Wild West mentality is a historical byproduct, it is now attributed to Darwin’s On the Origin of Species.

Religious fundamentalists are sincere in their view of the World as a battleground between Good and Evil. For them, anything that undermines faith in God, especially with regard to children, is utterly evil. The teaching of Science to children, in particular Evolution, is seen as a threat to the religious indoctrination of children. But the attack on Evolution is an attack on Science as a whole. Science is not about what to believe but rather a method for perceiving Reality. It is this critical, objective look at reality that is perceived as a threat by the religious establishment. However, teaching religious ideas as an alternative to factual descriptions of reality undermines science education by misinforming students about the scientific method — the basis for science literacy.

The scientific method teaches students the fundamentals of science — how to observe data, perform experiments and form scientific theories. Religious explanations for creation are not science – they cannot be confirmed or denied by the scientific method. Teaching them as science confuses and misleads students about the scientific method, thereby warping their ability to live in a technology-driven society.

Most people don’t read scientific papers because they are extremely complex. Even college science students have a hard time digesting scientific papers. But what is easy to understand is the reasoning that, since the Bible says this and science says that, science is the devil, and since we hate the devil and our job is to fight him, we must hate science and fight it. Christian leaders can be blind to the outside world at times, hence all this commotion about a science that goes against the Bible. The Bible says the same thing today about the Earth not moving around the Sun as it did thousands of years ago; the Bible did not change. In the early 17th century, Christian leaders threatened Galileo with heavy punishment for suggesting, based on his scientific evidence, that the Earth revolved around the Sun.

Any effort to introduce a theological doctrine into public school science curricula would inevitably offend some teachers and students. After all, a Protestant fundamentalist’s “literal” reading of Genesis would likely differ markedly from that of a Catholic or an Orthodox Jew. Both public school educators and religious leaders should be concerned about the prospect of biology lessons degenerating into debates on Biblical or religious interpretation.

Evolution by natural selection, at its core, works like this: living organisms are characterized by heritable variation in traits that affect their survival and reproductive abilities. This heritable variation originates from the (truly random) process of mutation at the level of DNA. The process of evolution turns out to be largely the result of two components: mutation (which is random) and natural selection (which is not random). It is the joint outcome of these two processes that—according to evolutionary theory—explains not only the diversity of all organisms on Earth, but most crucially the fact that they are so well adapted to their environment: those that weren’t did not survive the process. Because the environment changes over time, which characteristics of life forms are advantageous also changes, so it cannot be said in absolute terms that extinct forms are inferior to those present today.

You may find it intuitively difficult to believe that two relatively simple natural processes can produce the complex order we observe in living organisms. But the beauty of science is that it so often shows our intuitions to be wrong. Because nature does not always function according to our common sense or intuition, the scientific method is a necessity in the human race’s quest for survival.
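
As a purely illustrative toy (in the spirit of Dawkins’ well-known “weasel” program, and not a model of real biology), the sketch below shows random mutation and non-random selection together homing in on a “well-adapted” string. The target string, alphabet and parameters are all arbitrary assumptions standing in for a fixed environment.

```python
import random

# Toy illustration of how random mutation plus non-random selection can
# produce a well-adapted result. Fitness is simply the number of characters
# matching the (artificial) "environment" string.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "adapted to the environment"

def fitness(organism):
    return sum(a == b for a, b in zip(organism, TARGET))

def mutate(organism, rate=0.02):
    # Mutation is random: each character may change to any other, blindly.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in organism)

random.seed(1)
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(2000):
    # Selection is not random: only the fittest variant leaves offspring here.
    best = max(population, key=fitness)
    if best == TARGET:
        print("target reached in generation", generation)
        break
    population = [mutate(best) for _ in range(200)]
```

Random mutation alone would wander forever; it is the non-random retention of fitter variants that makes the order accumulate.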

Evolution is both a theory and a fact, contrary to simplistic creationist views. How can this be? Evolution is a fact in the sense that it is beyond reasonable doubt that living organisms have changed over time throughout the history of the earth. It is a theory in the sense that biologists have proposed a variety of mechanisms (including, but not limited to, mutation and natural selection) to explain the fact of evolution.

The theory of evolution is a fundamental concept of biology and it is supported by the overwhelming weight of scientific evidence. Simply eliminating evolution from the public school curriculum in order to ease community tensions would do a great disservice to all students. It would deny public school students an adequate science education – which is more and more becoming a necessity for professional success in a high-tech world.

It must be said that there is a propagandistic perversion of language: there are religious groups that use the language of science to mislead and actually undermine a scientific conceptualization of Reality. Religious opponents of evolution have cloaked religious beliefs in scientific-sounding language and then sought to mandate that schools teach the resulting “creation science” or “Intelligent Design” as an alternative to evolution. Intelligent Design organizations are fundamentalist religious entities that consider the introduction of creation science into the public schools part of their ministry. Creation science rested on a “contrived dualism” that recognized only two possible explanations for life, the scientific theory of evolution and biblical creationism; treated the two as mutually exclusive, such that “one must either accept the literal interpretation of Genesis or else believe in the godless system of evolution”; and accordingly viewed any critique of evolution as evidence that necessarily supported biblical creationism. Creation science is simply not science because it depends upon supernatural intervention, which cannot be explained by natural causes or proven through empirical investigation, and is therefore neither testable nor falsifiable.

The argument for Intelligent Design (ID) is not a new scientific argument, but is rather an old religious argument for the existence of God, traced back to at least Thomas Aquinas in the 13th century, who framed the argument as a syllogism: wherever complex design exists, there must have been a designer; nature is complex; therefore nature must have had an intelligent designer. Although proponents of ID occasionally suggest that the designer could be a space alien or a time-traveling cell biologist, no serious alternative to God as the designer has been proposed. The writings of leading ID proponents reveal that the designer postulated by their argument is the God of Christianity. Dramatic evidence of ID’s religious nature and aspirations is found in what is referred to as the “Wedge Document,” developed by the Discovery Institute’s Center for Renewal of Science and Culture (CRSC). The Discovery Institute, the think tank promoting ID, acknowledges as “Governing Goals” to “defeat scientific materialism and its destructive moral, cultural and political legacies” and “replace materialistic explanations with the theistic understanding that nature and human beings are created by God.”

ID fails on three different levels, any one of which is sufficient to preclude a determination that ID is science. They are: (1) ID violates the centuries-old ground rules of science by invoking and permitting supernatural causation; (2) the argument of irreducible complexity, central to ID, employs the same flawed and illogical contrived dualism that doomed creation science in the 1980s; and (3) ID’s negative attacks on evolution have been refuted by the scientific community.

Because Science wins over Religion on the factual description of Reality, the attack on Science is nowadays made on moral grounds. From the point of view of religious fundamentalists, Science is a competing religion, although a silly one at that. The scientific community is then attacked with straw-man arguments against evolution such as this one:

But if design, conversely, is rational, why do so many scientists reject it? Because this is not an issue of science, but of religion. Their religion is that of materialism and naturalism, and they are under no illusions as to the implications of design.

James M. Tour, in the blog entry Layman’s Reflections on Evolution and Creation. An Insider’s View of the Academy, claims that what he calls Macroevolution is insufficiently understood. Macroevolution is evolution on a scale of separated gene pools.[1] Macroevolutionary studies focus on change that occurs at or above the level of species, in contrast with microevolution,[2] which refers to smaller evolutionary changes (typically described as changes in allele frequencies) within a species or population. However, contrary to claims by creationists, macroevolution and microevolution describe fundamentally identical processes on different time scales.
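
To make the “same process, different time scales” point concrete, here is a minimal toy simulation of allele-frequency change under binomial sampling with weak selection; the population size, selection strengths and starting frequency are illustrative assumptions, not values taken from any cited author. Small per-generation shifts (microevolution) accumulate into large divergence between two separated gene pools over many generations.

```python
import random

# Toy binomial-sampling sketch: "microevolution" is the small change in an
# allele's frequency from one generation to the next; the same process, run
# over many generations in two gene pools that no longer mix, accumulates
# into large divergence.

def next_generation(freq, pop_size=500, selection=0.0):
    # The favoured allele gets a slight edge before the 2N gametes are drawn.
    p = min(1.0, freq * (1.0 + selection))
    carriers = sum(random.random() < p for _ in range(2 * pop_size))
    return carriers / (2 * pop_size)

def evolve(generations, selection):
    freq = 0.5
    for _ in range(generations):
        freq = next_generation(freq, selection=selection)
    return freq

random.seed(0)
for label, s in (("population A", +0.01), ("population B", -0.01)):
    print(label, "after 10 generations:", round(evolve(10, s), 3))
    print(label, "after 5000 generations:", round(evolve(5000, s), 3))
```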

Russian entomologist Yuri Filipchenko first coined the terms “macroevolution” and “microevolution” in 1927 in his German language work, “Variabilität und Variation”. Since the inception of the two terms, their meanings have been revised several times and the term macroevolution fell into limited disfavour when it was taken over by such writers as the geneticist Richard Goldschmidt (1940) and the paleontologist Otto Schindewolf to describe their orthogenetic theories.[7]

A more practical definition of the term describes it as changes occurring on geological time scales, in contrast to microevolution, which occurs on the timescale of human lifetimes.[8] This definition reflects the spectrum between micro- and macro-evolution, whilst leaving a clear difference between the terms: because the geological record rarely has a resolution better than 10,000 years, and humans rarely live longer than 100 years, “meso-evolution” is never observed.[8]

As a result, apart from Dobzhansky, Bernhard Rensch and Ernst Mayr, very few neo-Darwinian writers used the term, preferring instead to talk of evolution as changes in allele frequencies without mention of the level of the changes (above species level or below). Those who did were generally working within the continental European traditions (as Dobzhansky, Ernst Mayr, Bernhard Rensch, Richard Goldschmidt, and Otto Schindewolf were) and those who didn’t were generally working within the Anglo-American tradition (such as John Maynard Smith and Richard Dawkins). Hence, the term “macroevolution” is sometimes wrongly used as a litmus test of whether a writer is “properly” neo-Darwinian or not.

At the end of his article, Tour makes a reference to the movie Expelled: No Intelligence Allowed, a pro-intelligent design movie which, among other claims, strongly implies that Charles Darwin’s ideas led to Adolf Hitler’s atrocities. Tour asserts that a subset of the scientific establishment is retarding the careers of Darwinian skeptics. He closes by citing Viktor Frankl’s The Doctor and the Soul, with the comment “If Frankl is correct, God help us”:

“If we present a man with a concept of man which is not true, we may well corrupt him. When we present man as an automaton of reflexes, as a mind-machine, as a bundle of instincts, as a pawn of drives and reactions, as a mere product of instinct, heredity and environment, we feed the nihilism to which modern man is, in any case, prone.
“I became acquainted with the last stage of that corruption in my second concentration camp, Auschwitz. The gas chambers of Auschwitz were the ultimate consequence of the theory that man is nothing but the product of heredity and environment; or as the Nazi liked to say, ‘of Blood and Soil.’ I am absolutely convinced that the gas chambers of Auschwitz, Treblinka, and Maidanek were ultimately prepared not in some Ministry or other in Berlin, but rather at the desks and lecture halls of nihilistic scientists and philosophers [emphasis added].”

The main theme of the movie Expelled is that what it calls Darwinism inherently contains the seeds of Nazism, and even that Darwinism equals Nazism. This frighteningly immoral narrative is capped off, à la Moore, with shots of the Berlin Wall, old stock footage of East German police kicking around those trying to escape through the wall to the West, and some solemn blather by Ben Stein, who calls upon each one of us to rise up in defense of freedom and knock down a few walls in order to get creationism back into the curriculum of American schools.

From Darwin to Hitler: evolutionary ethics, eugenics, and racism in Germany is a 2004 book by Richard Weikart, a historian at California State University, Stanislaus,[1] and a senior fellow for the Center for Science and Culture of the Discovery Institute.[2] The work is controversial.[3] Graeme Gooday, John M. Lynch, Kenneth G. Wilson, and Constance K. Barsky wrote that “numerous reviews have accused Weikart of selectively viewing his rich primary material, ignoring political, social, psychological, and economic factors” that helped shape Nazi eugenics and racism.

The Discovery Institute, the hub of the intelligent design movement, “provided crucial funding” for the book’s research.[5] The Institute operates DarwinToHitler.com, which promotes the book and intelligent design.[6] Barbara Forrest, a prominent critic of the intelligent design movement, states that the book is tied to the DI’s ‘wedge strategy‘ of attacking Darwinian science as morally corrupting.[7] This strategy aims to “defeat [the] materialist world view” represented by the theory of evolution in favor of “a science consonant with Christian and theistic convictions.”[8]

Weikart has appeared in creationist films promoting the book. In 2006, Weikart appeared in the Coral Ridge Ministries creationist film Darwin’s Deadly Legacy, in which Weikart claims “Darwinian ideology is the core” of Nazism and D. James Kennedy concludes: “To put it simply, no Darwin, no Hitler.”[9][10] In 2008, Weikart, a supporter of intelligent design,[11] also appeared in Expelled: No Intelligence Allowed. In fact, scientific theories, even those like Darwin’s that address organic life, are morally neutral. Creationist organizations, like Creation Ministries International, cite Weikart’s work, claiming it shows “extensive documentation of the Darwin–Hitler link.”

There were many nations, such as Britain, which embraced Darwinism but saw a considerable number of their population killed trying to eliminate Nazism. There were other nations, such as the Soviet Union, where Darwinism was seen as so dangerous and subversive to state-sponsored dreams of social engineering that those who espoused it were killed or exiled, and a complete biological fairy tale, Lysenkoism, was put into classrooms and agricultural policy, ultimately leading to the deaths of millions from starvation.

Now, Christian groups are tying a neutral scientific theory to racism, antisemitism and xenophobia. That is extremely irresponsible and untrue. In fact, Christianity has a stronger link to antisemitism and xenophobia than Evolution, a scientific theory which holds that all humans descend from common ancestors.

Throughout history, and especially during the Crusades, European Christianity has consistently been a xenophobic culture – Jews were expelled from England, were treated as second-class citizens by Christians, and were not allowed to own land. Black people were expelled by the Protestant Queen Elizabeth during a food shortage in England. Hitler invoked Christian themes in support of his treatment of the Jews.

The linking of Nazism to Evolution is a dishonest and cheap attempt at personifying a scientific theory as the root of all evil in the world. Evolution implies that every human came from a common ancestor. Darwin himself was anti-slavery, and he said that there were “no clear distinctive characteristics to categorize races as separate species, and that all shared very similar physical and mental characteristics indicating common ancestry”. However, this went against Christian beliefs of that time. The German philosopher Georg Wilhelm Friedrich Hegel, who wrote “Life of Jesus” and “The Positivity of the Christian Religion” and is thought by many critics to have been Christian, believed that scientific racism – the use of science to propose that other races, such as blacks, are of different heritage and descended from apes – “fitted well with the Christian belief of a divine Creation following which all of humanity descended from the same Adam and Eve.”

The Bible sanctions slavery, and from the 1820s to the 1850s it was cited in the Southern states of the United States of America to support the idea that negroes had been created unequal, suited to slavery, by writers such as the Rev. Richard Furman, Joseph Smith Jr. and Thomas R. Cobb (http://en.wikipedia.org/wiki/Scientific_racism).

Christians are very uncomfortable with the idea that Adam and Eve were Africans – who, by the now debunked scientific racism, were deemed to be descendants of apes. This was a central Christian tenet for far longer than evolution has been around, and it was the catalyst for the systematic degradation of a particular group of people: the supposed fact that black people were descendants of apes gave Christians the biblical right to rule over them. Now that evolution has shown that all people are equal, and given the current taboo against identifying oneself as racist as well as the demise of scientific racism, many xenophobic people turn to Intelligent Design as a last-ditch attempt to salvage some element of supernatural support for dominion over a certain group of people. This does not mean all Intelligent Design supporters are racists, but it is certainly a comfortable place for xenophobic individuals to channel their energies into.

Cargo Cult Science


Richard Feynman

From a Caltech commencement address given in 1974
Also in “Surely You’re Joking, Mr. Feynman!”: Adventures of a Curious Character

During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas–which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science. And it developed very well, so that we are now in the scientific age. It is such a scientific age, in fact, that we have difficulty in understanding how witch doctors could ever have existed, when nothing that they proposed ever really worked–or very little of it did.

But even today I meet lots of people who sooner or later get me into a conversation about UFO’s, or astrology, or some form of mysticism, expanded consciousness, new types of awareness, ESP, and so forth. And I’ve concluded that it’s not a scientific world.

Most people believe so many wonderful things that I decided to investigate why they did. And what has been referred to as my curiosity for investigation has landed me in a difficulty where I found so much junk that I’m overwhelmed. First I started out by investigating various ideas of mysticism and mystic experiences. I went into isolation tanks and got many hours of hallucinations, so I know something about that. Then I went to Esalen, which is a hotbed of this kind of thought (it’s a wonderful place; you should go visit there). Then I became overwhelmed. I didn’t realize how MUCH there was.

At Esalen there are some large baths fed by hot springs situated on a ledge about thirty feet above the ocean. One of my most pleasurable experiences has been to sit in one of those baths and watch the waves crashing onto the rocky slope below, to gaze into the clear blue sky above, and to study a beautiful nude as she quietly appears and settles into the bath with me.

One time I sat down in a bath where there was a beautiful girl sitting with a guy who didn’t seem to know her. Right away I began thinking, “Gee! How am I gonna get started talking to this beautiful nude woman?”

I’m trying to figure out what to say, when the guy says to her, “I’m, uh, studying massage. Could I practice on you?” “Sure,” she says. They get out of the bath and she lies down on a massage table nearby. I think to myself, “What a nifty line! I can never think of anything like that!” He starts to rub her big toe. “I think I feel it,” he says. “I feel a kind of dent–is that the pituitary?” I blurt out, “You’re a helluva long way from the pituitary, man!” They looked at me, horrified–I had blown my cover–and said, “It’s reflexology!” I quickly closed my eyes and appeared to be meditating.

That’s just an example of the kind of things that overwhelm me. I also looked into extrasensory perception, and PSI phenomena, and the latest craze there was Uri Geller, a man who is supposed to be able to bend keys by rubbing them with his finger. So I went to his hotel room, on his invitation, to see a demonstration of both mindreading and bending keys. He didn’t do any mindreading that succeeded; nobody can read my mind, I guess. And my boy held a key and Geller rubbed it, and nothing happened. Then he told us it works better under water, and so you can picture all of us standing in the bathroom with the water turned on and the key under it, and him rubbing the key with his finger. Nothing happened. So I was unable to investigate that phenomenon.

But then I began to think, what else is there that we believe? (And I thought then about the witch doctors, and how easy it would have been to check on them by noticing that nothing really worked.) So I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down–or hardly going up–in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. It ought to be looked into; how do they know that their method should work? Another example is how to treat criminals. We obviously have made no progress–lots of theory, but no progress–in decreasing the amount of crime by the method that we use to handle criminals.

Yet these things are said to be scientific. We study them. And I think ordinary people with commonsense ideas are intimidated by this pseudoscience. A teacher who has some good idea of how to teach her children to read is forced by the school system to do it some other way–or is even fooled by the school system into thinking that her method is not necessarily a good one. Or a parent of bad boys, after disciplining them in one way or another, feels guilty for the rest of her life because she didn’t do “the right thing,” according to the experts.

So we really ought to look into theories that don’t work, and science that isn’t science.

I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science. In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas–he’s the controller–and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.

Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school–we never say explicitly what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.

The easiest way to explain this idea is to contrast it, for example, with advertising. Last night I heard that Wesson oil doesn’t soak through food. Well, that’s true. It’s not dishonest; but the thing I’m talking about is not just a matter of not being dishonest; it’s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another temperature, they all will–including Wesson oil. So it’s the implication which has been conveyed, not the fact, which is true, and the difference is what we have to deal with.

We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work. And it’s this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

A great deal of their difficulty is, of course, the difficulty of the subject and the inapplicability of the scientific method to the subject. Nevertheless, it should be remarked that this is not the only difficulty. That’s why the planes don’t land–but they don’t land.

We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.

But this long history of learning how to not fool ourselves–of having utter scientific integrity–is, I’m sorry to say, something that we haven’t specifically included in any particular course that I know of. We just hope you’ve caught on by osmosis.

The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.

I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I am not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.

For example, I was a little surprised when I was talking to a friend who was going to go on the radio. He does work on cosmology and astronomy, and he wondered how he would explain what the applications of his work were. “Well,” I said, “there aren’t any.” He said, “Yes, but then we won’t get support for more research of this kind.” I think that’s kind of dishonest. If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing– and if they don’t support you under those circumstances, then that’s their decision.

One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out. If we only publish results of a certain kind, we can make the argument look good. We must publish BOTH kinds of results.

I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish at all. That’s not giving scientific advice.

Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this–it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person–to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.

Nowadays, there’s a certain danger of the same thing happening, even in the famous field of physics. I was shocked to hear of an experiment being done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen, he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus. When asked why, he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result. And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying–possibly–the value of the experiments themselves, which is the whole purpose of the thing. It is often hard for the experimenters there to complete their work as their scientific integrity demands.

All experiments in psychology are not of this type, however. For example, there have been many experiments running rats through all kinds of mazes, and so on–with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.

He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers that clues that the rat is really using– not what you think it’s using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running.

I looked up the subsequent history of this research. The next experiment, and the one after that, never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand, or being very careful. They just went right on running the rats in the same old way, and paid no attention to the great discoveries of Mr. Young, and his papers are not referred to, because he didn’t discover anything about the rats. In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic example of cargo cult science.

Another example is the ESP experiments of Mr. Rhine, and other people. As various people have made criticisms–and they themselves have made criticisms of their own experiments–they improve the techniques so that the effects are smaller, and smaller, and smaller until they gradually disappear. All the para-psychologists are looking for some experiment that can be repeated–that you can do again and get the same effect–statistically, even. They run a million rats–no, it’s people this time–they do a lot of things and get a certain statistical effect. Next time they try it they don’t get it any more. And now you find a man saying that it is an irrelevant demand to expect a repeatable experiment. This is science?

This man also speaks about a new institution, in a talk in which he was resigning as Director of the Institute of Parapsychology. And, in telling people what to do next, he says that one of the things they have to do is to be sure they only train students who have shown their ability to get PSI results to an acceptable extent–not to waste their time on those ambitious and interested students who get only chance results. It is very dangerous to have such a policy in teaching–to teach students only how to get certain results, rather than how to do an experiment with scientific integrity.

So I have just one wish for you–the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity. May you have that freedom.

— Richard Feynman

The candle problem


Uploaded on Aug 25, 2009

http://www.ted.com Career analyst Dan Pink examines the puzzle of motivation, starting with a fact that social scientists know but most managers don’t: Traditional rewards aren’t always as effective as we think. Listen for illuminating stories — and maybe, a way forward.

The candle problem or candle task, also known as Duncker’s candle problem, is a cognitive performance test, measuring the influence of functional fixedness on a participant’s problem solving capabilities. The test was created [1] by Gestalt psychologist Karl Duncker and published posthumously in 1945. Duncker originally presented this test in his thesis on problem solving tasks at Clark University.

The test presents the participant with the following task: how to fix a lit candle on a wall (a cork board) in a way so the candle wax won’t drip onto the table below.[3] To do so, one may only use the following along with the candle:

  • a book of matches
  • a box of thumbtacks

The solution is to empty the box of thumbtacks, put the candle into the box, use the thumbtacks to nail the box (with the candle in it) to the wall, and light the candle with the match.[3] The concept of functional fixedness predicts that the participant will only see the box as a device to hold the thumbtacks and not immediately perceive it as a separate and functional component available to be used in solving the task.

Response

Many of the people who attempted the test explored other creative, but less efficient, methods to achieve the goal. For example, some tried to tack the candle to the wall without using the thumbtack box,[4] and others attempted to melt some of the candle’s wax and use it as an adhesive to stick the candle to the wall.[1] Neither method works.[1] However, if the task is presented with the tacks piled next to the box (rather than inside it), virtually all of the participants were shown to achieve the optimal solution, which is self defined.[4]

The test has been given to numerous people, including M.B.A. students at the Kellogg School of Management in a study investigating whether living abroad and creativity are linked.[5]

Glucksberg

Glucksberg (1962)[6] used a 2 × 2 design manipulating whether the tacks and matches were inside or outside of their boxes and whether subjects were offered cash prizes for completing the task quickly. Subjects who were offered no prize, termed low-drive, were told “We are doing pilot work on various problems in order to decide which will be the best ones to use in an experiment we plan to do later. We would like to obtain norms on the time needed to solve.” The remaining subjects, termed high-drive, were told “Depending on how quickly you solve the problem you can win $5.00 or $20.00. The top 25% of the Ss [subjects] in your group will win $5.00 each; the best will receive $20.00. Time to solve will be the criterion used.” (As a note, adjusting for inflation since 1962, the study’s publish year, the amounts in today’s dollars would be approximately $39 and $154, respectively.[7]) The empty-boxes condition was found to be easier than the filled-boxes condition: more subjects solved the problem, and those who did solve the problem solved it faster. Within the filled-boxes condition, high-drive subjects performed worse than low-drive subjects. Glucksberg interpreted this result in terms of “neobehavioristic drive theory”: “high drive prolongs extinction of the dominant habit and thus retards the correct habit from gaining ascendancy”. An explanation in terms of the overjustification effect is made difficult by the lack of a main effect for drive and by a nonsignificant trend in the opposite direction within the empty-boxes condition.
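
For readers unfamiliar with factorial designs, the sketch below lays out the 2 × 2 structure described above (box contents crossed with incentive) and randomly assigns subjects to the four cells. The condition labels and assignment procedure are illustrative assumptions, not details from Glucksberg (1962).

```python
import itertools
import random

# Sketch of a 2 x 2 factorial layout: one factor is whether the tacks and
# matches start inside their boxes, the other is whether a cash incentive
# ("high drive") is offered.

box_factor = ("filled_boxes", "empty_boxes")
drive_factor = ("low_drive", "high_drive")
conditions = list(itertools.product(box_factor, drive_factor))

def assign(subject_ids, seed=42):
    """Randomly assign subjects to the four cells in (roughly) equal numbers."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    return {cond: ids[i::len(conditions)] for i, cond in enumerate(conditions)}

groups = assign(range(40))
for condition, members in groups.items():
    print(condition, "->", len(members), "subjects")
```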

Another way to explain the higher levels of failure in the high-drive condition is that turning the task into a competition for limited resources can create mild levels of stress in the subject, which can lead to the sympathetic nervous system’s fight-or-flight response taking over the brain and body. This stress response effectively shuts down the creative thinking and problem solving areas of the brain in the prefrontal cortex.

Linguistic implications

E. Tory Higgins and W. M. Chaires found that having subjects repeat the names of common pairs of objects in this test, but in a different and unaccustomed linguistic structure, such as “box and tacks” instead of “box of tacks”, facilitated performance on the candle problem.[3] This phrasing helps one to distinguish the two entities as different and more accessible.[3]

In a written version of the task given to people at Stanford University, Michael C. Frank and language acquisition researcher Michael Ramscar reported that simply underlining certain relevant materials (“on the table there is a candle, a box of tacks, and a book of matches…”) increases the number of candle-problem solvers from 25% to 50%.[4]

References

  1. “Dan Pink on the surprising science of motivation”. Retrieved 16 January 2010.
  2. Daniel Biella and Wolfram Luther. “A Synthesis Model for the Replication of Historical Experiments in Virtual Environments”. 5th European Conference on e-Learning. Academic Conferences Limited. p. 23. ISBN 978-1-905305-30-8.
  3. Richard E. Snow and Marshall J. Farr, ed. (1987). “Positive Affect and Organization”. Aptitude, Learning, and Instruction Volume 3: Conative and Affective Process Analysis. Routledge. ISBN 978-0-89859-721-9.
  4. Frank, Michael. “Against Informational Atomism”. Retrieved 15 January 2010.
  5. “Living Outside the Box: Living abroad boosts creativity”. April 2009. Retrieved 16 January 2010.
  6. Glucksberg, S. (1962). “The influence of strength of drive on functional fixedness and perceptual recognition”. Journal of Experimental Psychology 63: 36–41. doi:10.1037/h0044683. PMID 13899303.
  7. Inflated values automatically calculated.

Cognitive bias


The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972 and grew out of their experience of people’s innumeracy, or inability to reason intuitively with the greater orders of magnitude. They and their colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory. They explained these differences in terms of heuristics: rules which are simple for the brain to compute but which introduce systematic errors. An example is the availability heuristic, in which the ease with which something comes to mind is used to indicate how often (or how recently) it has been encountered.
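
As a toy illustration of how such a heuristic can introduce systematic error, the sketch below lets “memory” over-represent vivid events, so that frequency judged by availability diverges from true frequency; the event names, frequencies and memorability weights are invented purely for the example.

```python
import random
from collections import Counter

# Toy model of the availability heuristic: the judged frequency of an event
# is read off from how many instances come to mind, but memory
# over-represents vivid, heavily reported events.

random.seed(3)
true_probs   = {"plane crash": 0.02, "car crash": 0.98}  # actual relative frequency
memorability = {"plane crash": 0.90, "car crash": 0.05}  # chance an instance is recalled

history = random.choices(list(true_probs), weights=list(true_probs.values()), k=10_000)
memory = [event for event in history if random.random() < memorability[event]]

recalled = Counter(memory)
for event in true_probs:
    true_freq = history.count(event) / len(history)
    judged_freq = recalled[event] / len(memory)
    print(f"{event:12s} true {true_freq:.3f}  judged {judged_freq:.3f}")
```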

Cognitive bias is any of a wide range of observer effects identified in cognitive science, including very basic statistical and memory errors that are common to all human beings (many of which have been discussed by Amos Tversky and Daniel Kahneman) and drastically skew the reliability of anecdotal and legal evidence. They also significantly affect the scientific method which is deliberately designed to minimize such bias from any one observer.

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, which may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.[1][2][3] Implicit in the concept of a “pattern of deviation” is a standard of comparison with what is normatively expected; this may be the judgment of people outside those particular situations, or may be a set of independently verifiable facts. A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics.

Some cognitive biases are presumably adaptive, for example, because they lead to more effective actions in a given context or enable faster decisions when timeliness is more valuable than accuracy (heuristics). Others presumably result from a lack of appropriate mental mechanisms (bounded rationality), or simply from a limited capacity for information processing.

Biases can be distinguished on a number of dimensions. For example, there are biases specific to groups (such as the risky shift) as well as biases at the individual level.

Some biases affect decision-making, where the desirability of options has to be considered (e.g., the sunk cost fallacy). Others, such as illusory correlation, affect judgment of how likely something is, or of whether one thing is the cause of another. A distinctive class of biases affects memory,[12] such as consistency bias (remembering one’s past attitudes and behavior as more similar to one’s present attitudes).

Some biases reflect a subject’s motivation,[13] for example, the desire for a positive self-image leading to Egocentric bias[14] and the avoidance of unpleasant cognitive dissonance. Other biases are due to the particular way the brain perceives, forms memories and makes judgments. This distinction is sometimes described as “Hot cognition” versus “Cold Cognition”, as motivated reasoning can involve a state of arousal.

Among the “cold” biases, some are due to ignoring relevant information (e.g. Neglect of probability), whereas some involve a decision or judgement being affected by irrelevant information (for example the Framing effect where the same problem receives different responses depending on how it is described) or giving excessive weight to an unimportant but salient feature of the problem (e.g., Anchoring).

The fact that some biases reflect motivation, and in particular the motivation to have positive attitudes toward oneself,[14] accounts for the fact that many biases are self-serving or self-directed (e.g., illusion of asymmetric insight, self-serving bias, projection bias). There are also biases in how subjects evaluate in-groups or out-groups; evaluating in-groups as more diverse and “better” in many respects, even when those groups are arbitrarily defined (ingroup bias, outgroup homogeneity bias).

Some cognitive biases belong to the subgroup of attentional biases, which refer to paying increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure these biases are the Stroop task[15][16] and the dot-probe task.
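As a concrete illustration of how such tasks quantify attentional bias, here is a minimal sketch of scoring a dot-probe task. The trial data and variable names below are invented for illustration; only the scoring rule, mean reaction time on neutral-probe trials minus mean reaction time on salient-probe trials, reflects how such bias indices are commonly defined, with a positive score suggesting attention drawn toward the salient (e.g., drug-related) cue.

```python
# Hedged sketch: computing a dot-probe attentional-bias index.
# The trials below are invented toy data, not results from any study.
from statistics import mean

trials = [
    # (location the probe appeared in, reaction time in ms)
    ("salient", 412), ("neutral", 455), ("salient", 398),
    ("neutral", 471), ("salient", 430), ("neutral", 449),
]

rt_salient = mean(rt for cond, rt in trials if cond == "salient")  # probe replaced the drug-related cue
rt_neutral = mean(rt for cond, rt in trials if cond == "neutral")  # probe replaced the neutral cue
bias_index = rt_neutral - rt_salient  # positive => faster when the probe follows the salient cue
print(f"attentional bias index: {bias_index:.1f} ms")
```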

The following is a list of the more commonly studied cognitive biases:

For other noted biases, see List of cognitive biases.
  • Framing is the use of a too-narrow approach and description of the situation or issue.
  • Hindsight bias, sometimes called the “I-knew-it-all-along” effect, is the inclination to see past events as being predictable.
  • Fundamental attribution error is the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior.
  • Confirmation bias is the tendency to search for or interpret information in a way that confirms one’s preconceptions; this is related to the concept of cognitive dissonance.
  • Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
  • Belief bias is when one’s evaluation of the logical strength of an argument is biased by one’s belief in the truth or falsity of the conclusion.

A 2012 Psychological Bulletin article suggests that at least 8 seemingly unrelated biases can be produced by the same information-theoretic generative mechanism.[17] It is shown that noisy deviations in the memory-based information processes that convert objective evidence (observations) into subjective estimates (decisions) can produce regressive conservatism, Bayesian conservatism, illusory correlations, the better-than-average and worse-than-average effects, the subadditivity effect, exaggerated expectation, overconfidence, and the hard–easy effect.
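To make this mechanism concrete, here is a minimal sketch (my own illustration under the assumptions just described, not code from the article): an objective probability is converted into a subjective estimate through a noisy internal log-odds representation, and the noise alone is enough to pull average estimates toward the middle of the scale, i.e. the kind of regressive conservatism listed above.

```python
# Hedged sketch of a noisy evidence-to-estimate channel. The log-odds
# representation and the noise level are illustrative assumptions.
import math
import random

random.seed(1)

def noisy_estimate(p_true, noise_sd=1.0):
    """Convert an objective probability into a noisy subjective estimate."""
    log_odds = math.log(p_true / (1 - p_true))  # internal representation of the evidence
    log_odds += random.gauss(0, noise_sd)       # memory/processing noise
    return 1 / (1 + math.exp(-log_odds))        # back to a probability judgment

for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    mean_est = sum(noisy_estimate(p) for _ in range(20_000)) / 20_000
    print(f"true {p:.2f} -> mean subjective estimate {mean_est:.2f}")
# Extreme true values come back, on average, less extreme than they are:
# the estimates regress toward 0.5 even though the noise itself is unbiased.
```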

However, as recent research has demonstrated, even scientists who adhere to the scientific method can’t guarantee they will draw the best possible conclusions. “How could such highly educated and precisely trained professionals veer off the path of objectivity?” The answer is simple: being human.

As the fields of psychology and behavioral economics have demonstrated, Homo sapiens is a seemingly irrational species that appears, more often than not, to think and behave in nonsensical rather than commonsensical ways. The reason is that we fall victim to a veritable laundry list of cognitive biases that cause us to engage in distorted, imprecise and incomplete thinking which, not surprisingly, results in “perceptual distortion, inaccurate judgment or illogical interpretation” (thanks, Wikipedia) and, by extension, poor and sometimes catastrophic decisions.

Well-known examples of the results of cognitive biases include the Internet, the housing and financial crises of the past decade, truly stupid use of social media by politicians, celebrities and professional athletes, the existence of the $2.5 billion self-help industry, and, well, believing that a change in the controlling party in Washington will somehow change its toxic political culture.

What is interesting is that many of these cognitive biases must have had, at some point in our evolution, adaptive value. These distortions helped us to process information more quickly (e.g., when stalking prey in the jungle), meet our most basic needs (e.g., find mates) and connect with others (e.g., be part of a “tribe”).

The biases that helped us survive in primitive times, when life was much simpler (e.g., life goal: live through the day) and the speed of a decision rightfully trumped its absolute accuracy, don’t appear to be quite as adaptive in today’s much more complex world. Due to the complicated nature of life these days, correctness of information, thoroughness of processing, precision of interpretation and soundness of judgment are, in most situations, far more important than the simplest and fastest route to a judgment.

Unfortunately, there is no magic pill that will inoculate us from these cognitive biases. But we can reduce their power over us by understanding these distortions, looking for them in our own thinking and making an effort to counter their influence over us as we draw conclusions, make choices and come to decisions. In other words, just knowing and considering these universal biases (in truth, what most people call common sense is actually common bias) will make us less likely to fall victim to them.

Here are some of the most widespread cognitive biases that contaminate our ability to use common sense:

  • The bandwagon effect (aka herd mentality) describes the tendency to think or act in certain ways because other people do. Examples include the popularity of Apple products, use of “in-group” slang and clothing styles, and watching the “Real Housewives of …” reality-TV franchise.
  • The confirmation bias involves the inclination to seek out information that supports our own preconceived notions. The reality is that most people don’t like to be wrong, so they surround themselves with people and information that confirm their beliefs. The most obvious example these days is the tendency to follow news outlets that reinforce our political beliefs.
  • Illusion of control is the propensity to believe that we have more control over a situation than we actually do. If we don’t actually have control, we fool ourselves into thinking we do. Examples include rally caps in sports and “lucky” items.
  • The Semmelweis reflex (just had to include this one because of its name) is the predisposition to deny new information that challenges our established views. Sort of the yang to the yin of the confirmation bias, it exemplifies the adage “if the facts don’t fit the theory, throw out the facts.” An example is the “Seinfeld” episode in which George Costanza’s girlfriend simply refuses to allow him to break up with her.
  • The causation bias describes the tendency to assume a cause-and-effect relationship in situations where none exists (or where there is merely a correlation or association). An example is believing someone is angry with you because they haven’t responded to your email when, more likely, they are busy and just haven’t gotten to it yet.
  • The overconfidence effect involves unwarranted confidence in one’s own knowledge. Examples include political and sports prognosticators.
  • The false consensus effect is the penchant to believe that others agree with you more than they actually do. Examples include guys who assume that all guys like sexist humor.
  • Finally, there is the granddaddy of all cognitive biases, the fundamental attribution error, which involves the tendency to attribute other people’s behavior to their personalities and our own behavior to the situation. An example: when someone treats you poorly, you probably assume they are a jerk, but when you’re not nice to someone, it’s because you’re having a bad day.

Memory bias — memory biases may either enhance or impair the recall of a memory, or they may alter the content of what we report remembering. There are many such biases; several are listed in the memory section below.

Anchoring bias in decision-making — anchoring or focalism is a term used in psychology to describe the common human tendency to rely too heavily, or “anchor,” on one trait or piece of information when making decisions.

Many of these biases are studied for how they affect belief formation, business decisions, and scientific research.

  • Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, crowd psychology, herd behaviour, and manias.
  • Bias blind spot — the tendency not to compensate for one’s own cognitive biases.
  • Choice-supportive bias — the tendency to remember one’s choices as better than they actually were.
  • Confirmation bias — the tendency to search for or interpret information in a way that confirms one’s preconceptions.
  • Congruence bias — the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
  • Contrast effect — the enhancement or diminishment of a weight or other measurement when compared with a recently observed contrasting object.
  • Déformation professionnelle — the tendency to look at things according to the conventions of one’s own profession, forgetting any broader point of view.
  • Endowment effect — “the fact that people often demand much more to give up an object than they would be willing to pay to acquire it”.[2]
  • Exposure-suspicion bias – a knowledge of a subject’s disease in a medical study may influence the search for causes.
  • Extreme aversion — most people will go to great lengths to avoid extremes. People are more likely to choose an option if it is the intermediate choice.
  • Focusing effect — a prediction bias occurring when people place too much importance on one aspect of an event; it causes errors in accurately predicting the utility of a future outcome.
  • Framing – drawing different conclusions from the same information, depending on how that information is presented.
  • Hyperbolic discounting — the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, the closer to the present both payoffs are (a worked example follows this list).
  • Illusion of control — the tendency for human beings to believe they can control or at least influence outcomes that they clearly cannot.
  • Impact bias — the tendency for people to overestimate the length or the intensity of the impact of future feeling states.
  • Information bias — the tendency to seek information even when it cannot affect action.
  • Irrational escalation — the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Loss aversion — “the disutility of giving up an object is greater than the utility associated with acquiring it”.[3] (see also sunk cost effects and Endowment effect).
  • Neglect of probability — the tendency to completely disregard probability when making a decision under uncertainty.
  • Mere exposure effect — the tendency for people to express undue liking for things merely because they are familiar with them.
  • Obsequiousness bias – the tendency of respondents to systematically alter their responses in the direction they perceive to be desired by the investigator.
  • Omission bias — the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
  • Outcome bias — the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
  • Planning fallacy — the tendency to underestimate task-completion times. Also formulated as Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
  • Post-purchase rationalization — the tendency to persuade oneself through rational argument that a purchase was a good value.
  • Pseudocertainty effect — the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.
  • Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
  • Selective perception — the tendency for expectations to affect perception.
  • Status quo bias — the tendency for people to like things to stay relatively the same (see also Loss aversion and Endowment effect).[4]
  • Unacceptability bias – questions that may embarrass or invade privacy are refused or evaded.
  • Unit bias — the tendency to want to finish a given unit of a task or an item; it has strong effects on the consumption of food in particular.
  • Von Restorff effect — the tendency for an item that “stands out like a sore thumb” to be more likely to be remembered than other items.
  • Zero-risk bias — the preference for reducing a small risk to zero over a greater reduction in a larger risk. It is relevant e.g. to the allocation of public health resources and the debate about nuclear power.
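As a worked example of the hyperbolic discounting entry above (a sketch with illustrative parameter values, not empirical ones), the code below compares a hyperbolic discount curve, V = A / (1 + kD), with an exponential one and shows the characteristic preference reversal: the smaller, sooner reward wins when both payoffs are near, but the larger, later reward wins when the same choice is pushed into the future.

```python
# Hedged sketch: hyperbolic vs. exponential discounting. The amounts, delays
# and discount parameters are illustrative assumptions.

def hyperbolic_value(amount, delay_days, k=0.2):
    """Subjective value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

def exponential_value(amount, delay_days, daily_factor=0.95):
    """Subjective value under exponential discounting: V = A * d ** D."""
    return amount * daily_factor ** delay_days

for label, value in (("hyperbolic", hyperbolic_value),
                     ("exponential", exponential_value)):
    near = "sooner" if value(100, 0) > value(110, 1) else "later"   # $100 now vs. $110 tomorrow
    far = "sooner" if value(100, 30) > value(110, 31) else "later"  # the same choice, 30 days out
    print(f"{label}: near choice -> {near}, far choice -> {far}")
# Exponential discounting picks the same option in both cases; the hyperbolic
# curve flips from preferring "later" at a distance to "sooner" up close.
```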

Many of these biases are often studied for how they affect business and economic decisions and how they affect experimental research.

  • Ambiguity effect — the avoidance of options for which missing information makes the probability seem “unknown”.
  • Anchoring — the tendency to rely too heavily, or “anchor,” on a past reference or on one trait or piece of information when making decisions.
  • Anthropic bias — the tendency for one’s evidence to be biased by observation selection effects.
  • Attentional bias — neglect of relevant data when making judgments of a correlation or association.
  • Availability heuristic — a biased prediction, due to the tendency to focus on the most salient and emotionally-charged outcome.
  • Clustering illusion — the tendency to see patterns where actually none exist.
  • Conjunction fallacy — the tendency to assume that specific conditions are more probable than general ones.
  • Gambler’s fallacy — the tendency to assume that individual random events are influenced by previous random events. For example, “I’ve flipped heads with this coin five times consecutively, so the chance of tails coming up on the sixth flip is much greater than heads.” (See the simulation after this list.)
  • Hindsight bias — sometimes called the “I-knew-it-all-along” effect: the inclination to see past events as being predictable, based on knowledge of later events.
  • Hostile media effect — the tendency to perceive news coverage as biased against your position on an issue.
  • Illusory correlation — beliefs that inaccurately suppose a relationship between a certain type of action and an effect.
  • Ludic fallacy — the analysis of chance-related problems within the narrow frame of games, ignoring the complexity of reality and the non-Gaussian distribution of many things.
  • Neglect of prior base rates effect — the tendency to fail to incorporate prior known probabilities which are pertinent to the decision at hand.
  • Observer-expectancy effect — when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
  • Optimism bias — the systematic tendency to be over-optimistic about the outcome of planned actions. It has been found to be linked to the left inferior frontal gyrus, and disrupting this region of the brain removes the bias.
  • Overconfidence effect — the tendency to overestimate one’s own abilities.
  • Positive outcome bias — a tendency, in prediction, to overestimate the probability of good things happening to oneself (see also wishful thinking, optimism bias and valence effect).
  • Primacy effect — the tendency to weigh initial events more than subsequent events.
  • Recency effect — the tendency to weigh recent events more than earlier events (see also ‘peak-end rule’).
  • Reminiscence bump — the effect that people tend to recall more personal events from adolescence and early adulthood than from other lifetime periods.
  • Rosy retrospection — the tendency to rate past events more positively than one actually rated them when the events occurred.
  • Subadditivity effect — the tendency to judge probability of the whole to be less than the probabilities of the parts.
  • Telescoping effect — the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
  • Texas sharpshooter fallacy — the fallacy of selecting or adjusting a hypothesis after the data are collected, making it impossible to test the hypothesis fairly.
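The gambler’s fallacy entry above can be checked directly with a small Monte Carlo simulation, a sketch of my own assuming a fair coin: conditioning on a run of five heads, the next flip still comes up tails about half the time.

```python
# Hedged sketch: the flip after five consecutive heads is still 50/50.
import random

random.seed(0)
streaks = tails_after_streak = 0
for _ in range(2_000_000):
    flips = [random.random() < 0.5 for _ in range(6)]  # True means heads
    if all(flips[:5]):                                 # first five flips were heads
        streaks += 1
        tails_after_streak += not flips[5]             # count a tail on the sixth flip
print(f"P(tails | five heads) = {tails_after_streak / streaks:.3f} over {streaks} streaks")
```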

Most of these biases are labeled as attributional biases.

  • Actor-observer bias — the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation. This is coupled with the opposite tendency for the self: explanations of one’s own behaviors overemphasize the influence of the situation and underemphasize the influence of personality (see also fundamental attribution error).
  • Dunning-Kruger effect — “…when people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, …they are left with the mistaken impression that they are doing just fine.”[5] (See also the Lake Wobegon effect, and overconfidence effect).
  • Egocentric bias — occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
  • Forer effect (aka Barnum effect) — the tendency to give high accuracy ratings to descriptions of one’s personality that supposedly are tailored specifically for oneself but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
  • False consensus effect — the tendency for people to overestimate the degree to which others agree with them.
  • Fundamental attribution error — the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).
  • Halo effect — the tendency for a person’s positive or negative traits to “spill over” from one area of their personality to another in others’ perceptions of them (see also physical attractiveness stereotype).
  • Herd instinct – a common tendency to adopt the opinions and follow the behaviors of the majority to feel safer and to avoid conflict.
  • Illusion of asymmetric insight — people perceive their knowledge of their peers to surpass their peers’ knowledge of them.
  • Illusion of transparency — people overestimate others’ ability to know them, and they also overestimate their ability to know others.
  • Ingroup bias — the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
  • Just-world phenomenon — the tendency for people to believe that the world is “just” and therefore people “get what they deserve.”
  • Lake Wobegon effect — the human tendency to report flattering beliefs about oneself and believe that one is above average (see also worse-than-average effect, and overconfidence effect).
  • Notational bias — a form of cultural bias in which a notation induces the appearance of a nonexistent natural law.
  • Outgroup homogeneity bias — individuals see members of their own group as being relatively more varied than members of other groups.
  • Projection bias — the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.
  • Self-serving bias — the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).
  • Self-fulfilling prophecy — the tendency to engage in behaviors that elicit results which will (consciously or subconsciously) confirm our beliefs.
  • System justification — the tendency to defend and bolster the status quo, i.e. existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged sometimes even at the expense of individual and collective self-interest.
  • Trait ascription bias — the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.
  • Beneffectance – perceiving oneself as responsible for desirable outcomes but not for undesirable ones (a term coined by Greenwald, 1980).
  • Consistency bias – incorrectly remembering one’s past attitudes and behaviour as resembling present attitudes and behaviour.
  • Cryptomnesia – a form of misattribution where a memory is mistaken for imagination.
  • Egocentric bias – recalling the past in a self-serving manner, e.g. remembering one’s exam grades as being better than they were, or remembering a caught fish as being bigger than it was.
  • Confabulation or false memory – remembering something that never actually happened.
  • Hindsight bias – filtering memory of past events through present knowledge, so that those events look more predictable than they actually were; also known as the ‘I-knew-it-all-along effect’.
  • Selective memory and selective reporting.
  • Suggestibility – a form of misattribution where ideas suggested by a questioner are mistaken for memory. Often a key aspect of hypnotherapy.

 

The demarcation problem

What is Pseudoscience? Distinguishing between science and pseudoscience is problematic Michael Shermer CLIMATE DENIERS ARE ACCUSED OF PRACTICING PSEUDOSCIENCE, as are intelligent design creationists, astrologers, UFOlogists, parapsychologists, practitioners of alternative medicine, and often anyone who strays far from the scientific mainstream. The boundary problem between science and pseudoscience, in fact, is notoriously fraught with definitional […]

What is Pseudoscience?

Distinguishing between science and pseudoscience is problematic

Michael Shermer

CLIMATE DENIERS ARE ACCUSED OF PRACTICING PSEUDOSCIENCE, as are intelligent design creationists, astrologers, UFOlogists, parapsychologists, practitioners of alternative medicine, and often anyone who strays far from the scientific mainstream. The boundary problem between science and pseudoscience, in fact, is notoriously fraught with definitional disagreements because the categories are too broad and fuzzy on the edges, and the term “pseudoscience” is subject to adjectival abuse against any claim one happens to dislike for any reason. In his 2010 book Nonsense on Stilts (University of Chicago Press), philosopher of science Massimo Pigliucci concedes that there is “no litmus test,” because “the boundaries separating science, nonscience, and pseudoscience are much fuzzier and more permeable than Popper (or, for that matter, most scientists) would have us believe.”

It was Karl Popper who first identified what he called “the demarcation problem” of finding a criterion to distinguish between empirical science, such as the successful 1919 test of Einstein’s general theory of relativity, and pseudoscience, such as Freud’s theories, whose adherents sought only confirming evidence while ignoring disconfirming cases. Einstein’s theory might have been falsified had solar-eclipse data not shown the requisite deflection of starlight bent by the sun’s gravitational field. Freud’s theories, however, could never be disproved, because there was no testable hypothesis open to refutability. Thus, Popper famously declared “falsifiability” as the ultimate criterion of demarcation.

Baloney Detection Kit

THE TEN QUESTIONS How reliable is the source of the claim? Does the source make similar claims? Have the claims been verified by somebody else? Does this fit with the way the world works? Has anyone tried to disprove the claim? Where does the preponderance of evidence point? Is the claimant playing by the rules […]

THE TEN QUESTIONS

  1. How reliable is the source of the claim?
  2. Does the source make similar claims?
  3. Have the claims been verified by somebody else?
  4. Does this fit with the way the world works?
  5. Has anyone tried to disprove the claim?
  6. Where does the preponderance of evidence point?
  7. Is the claimant playing by the rules of science?
  8. Is the claimant providing positive evidence?
  9. Does the new theory account for as many phenomena as the old theory?
  10. Are personal beliefs driving the claim?

CREDITS

This is the first video by RDFTV.
Presented by The Richard Dawkins Foundation for Reason and Science
Directed by Josh Timonen
Produced by Maureen Norton
Animation by Pew 36 Animation Studios
Music by Neal Acree
Post Production Sound by Sound Satisfaction
Supervising Sound Editor/Re-Recording Mixer: Gary J. Coppola, C.A.S.
Sound Editor: Ben Rauscher
Production Assistant: Graham Immel
Copyright © 2009 Upper Branch Productions, Inc.

Baloney Detection

How to draw boundaries between science and pseudoscience

By Michael Shermer

When lecturing on science and pseudoscience at colleges and universities, I am inevitably asked, after challenging common beliefs held by many students, “Why should we believe you?” My answer: “You shouldn’t.”

I then explain that we need to check things out for ourselves and, short of that, at least to ask basic questions that get to the heart of the validity of any claim. This is what I call baloney detection, in deference to Carl Sagan, who coined the phrase “Baloney Detection Kit.” To detect baloney–that is, to help discriminate between science and pseudoscience–I suggest 10 questions to ask when encountering any claim.

1. How reliable is the source of the claim?

Pseudoscientists often appear quite reliable, but when examined closely, the facts and figures they cite are distorted, taken out of context or occasionally even fabricated. Of course, everyone makes some mistakes. And as historian of science Daniel Kevles showed so effectively in his book The Baltimore Case, it can be hard to detect a fraudulent signal within the background noise of sloppiness that is a normal part of the scientific process. The question is, Do the data and interpretations show signs of intentional distortion? When an independent committee established to investigate potential fraud scrutinized a set of research notes in Nobel laureate David Baltimore’s laboratory, it revealed a surprising number of mistakes. Baltimore was exonerated because his lab’s mistakes were random and nondirectional.
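The idea of “random and nondirectional” errors can be made concrete with a simple sign test; the sketch below is my own illustration, and the error counts in it are invented, not figures from the actual investigation. If mistakes are equally likely to favor or disfavor a hypothesis, the number that favor it should look like the outcome of a fair coin.

```python
# Hedged sketch: a two-sided binomial sign test for directional bias in a set
# of recorded errors. The example counts are illustrative only.
from math import comb

def two_sided_sign_test(favoring, total):
    """P-value for a split at least this lopsided under a fair 50/50 model."""
    k = max(favoring, total - favoring)
    one_tail = sum(comb(total, i) for i in range(k, total + 1)) / 2 ** total
    return min(1.0, 2 * one_tail)

print(two_sided_sign_test(favoring=11, total=20))  # ~0.82: consistent with nondirectional errors
print(two_sided_sign_test(favoring=19, total=20))  # ~0.00004: the errors look directional
```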

2. Does this source often make similar claims?

Pseudoscientists have a habit of going well beyond the facts. Flood geologists (creationists who believe that Noah’s flood can account for many of the earth’s geologic formations) consistently make outrageous claims that bear no relation to geological science. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

Thomas Gold of Cornell University is notorious for his radical ideas, but he has been right often enough that other scientists listen to what he has to say. Gold proposes, for example, that oil is not a fossil fuel at all but the by-product of a deep, hot biosphere (microorganisms living at unexpected depths within the crust). Hardly any earth scientists with whom I have spoken think Gold is right, yet they do not consider him a crank. Watch out for a pattern of fringe thinking that consistently ignores or distorts data.

3. Have the claims been verified by another source?

Typically pseudoscientists make statements that are unverified or verified only by a source within their own belief circle. We must ask, Who is checking the claims, and even who is checking the checkers? The biggest problem with the cold fusion debacle, for instance, was not that Stanley Pons and Martin Fleischmann were wrong. It was that they announced their spectacular discovery at a press conference before other laboratories verified it. Worse, when cold fusion was not replicated, they continued to cling to their claim. Outside verification is crucial to good science.

4. How does the claim fit with what we know about how the world works?

An extraordinary claim must be placed into a larger context to see how it fits. When people claim that the Egyptian pyramids and the Sphinx were built more than 10,000 years ago by an unknown, advanced race, they are not presenting any context for that earlier civilization. Where are the rest of the artifacts of those people? Where are their works of art, their weapons, their clothing, their tools, their trash? Archaeology simply does not operate this way.

5. Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?

This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible for any of us to avoid. It is why the methods of science that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are so critical.

Bias influencing results

Harvard biologist and popular author Stephen Jay Gould was a well-known advocate for evolution and denouncer of scientific bias. But a new study shows that one of his most famous claims—that an early researcher unconsciously manipulated his measurements of skulls to make Caucasians seem smarter—is baseless. The researcher actually made few errors, and it looks […]

Harvard biologist and popular author Stephen Jay Gould was a well-known advocate for evolution and denouncer of scientific bias. But a new study shows that one of his most famous claims—that an early researcher unconsciously manipulated his measurements of skulls to make Caucasians seem smarter—is baseless.

The researcher actually made few errors, and it looks like Gould never bothered to measure the skulls himself, as the study’s authors did, before crying bias. “Ironically,” the authors write, “Gould’s own analysis…is likely the stronger example of a bias influencing results.”

  • Gould’s influential 1981 book, The Mismeasure of Man, asserts that Samuel George Morton, a 19th-century anthropologist, fudged his measurements and analysis of 100 human skulls to support his hypothesis that brain volume would be larger in Caucasians. It’s now a textbook example of how unconscious bias can sway the results of a study.
  • The team went back and measured Morton’s skulls themselves. What they found was that very few of his measurements were off, and the errors he had made actually contradicted his hypothesis that Caucasian brains would be larger. Nature News’ Great Beyond provides the team’s full list of the mistakes Gould made in his analysis.
  • This isn’t the first time that scientists have looked into Gould’s assertion and found it lacking, writes the NYTimes:

    An earlier study by John S. Michael, then an undergraduate at Penn, concluded that Morton’s results were “reasonably accurate,” with no clear sign of manipulation.

  • But because it was the work of an undergraduate, the scientific community did not immediately accept the conclusion. “It is not entirely evident that one should prefer the measurements of an undergraduate to those of [a] professional paleontologist,” [Philip Kitcher, a philosopher of science at Columbia] wrote in 2004 (via NYTimes). “Pending further measurement of the skulls and further analysis of the data, it seems best to let this grubby affair rest in a footnote.”