For thousands of years, the field of philosophy known as ethics has studied the general nature of morals and morality. Morality refers to principles concerning the distinction between right and wrong (or good and bad) behavior. Thus, when concerned with the ethics of a research study, we are evaluating whether it would be right or wrong to perform the study. The evaluation is based on the costs (i.e., negative effects) and benefits (i.e., positive effects) experienced by all involved: research participants, researchers, and society (the ethical guidelines for psychologists are described in American Psychological Association, 2010).
In evaluating whether Milgram’s research was ethical or not, one important question to ask is: would you have wanted to be a participant in one of his experiments? If you answered, “no,” you probably were concerned about the distress felt by most participants as they struggled with the conflict between defying an authority figure and harming a human being. You also might have been disturbed by the possibility that some participants felt as if they had been made to look like “fools.” Neither of these harms would have occurred if deception — lying to participants about the purpose of a study — hadn’t been used. Elms (1995) described the four pieces of information deceptively withheld from the Teachers in Milgram’s study:
Number one, the experiment was a study of obedience to authority, not a study of memory and learning. Number two, the volunteer who assumed the role of learner was actually an experimental confederate. Number three, the only shock that anyone ever got was the 45 volt sample shock given to each teacher; the shock generator was not wired to give any shocks to the learner. Number four, the learner’s kicks against the wall, his screams, his refusals to continue, were all carefully scripted and rehearsed, as were the experimenter’s orders to the teacher.
But, if the participants had known this information, there could have been no study of obedience: deception was necessary if Milgram was to test his hypotheses about the situational factors involved in obeying an authority in an extreme situation (see Elms, 1982; Elms, 1998). Today, deception is often used in social-psychological research: “between one half and three quarters of published social-psychological research reports involve some element of deception” (Alcock, 2006, p. 10). The dilemma we face in using deception involves a conflict between two sets of issues:
- Ethical concerns. Research may cause people to experience distress, embarrassment, and other psychological harms;
- Scientific concerns. Research requires that hypotheses be adequately tested.
Critics of Milgram’s studies have asked questions such as the following:
- Is scientific understanding of humans such an important goal that researchers have the right to deceive people, thereby causing psychological harm?
- How can we determine if researchers have other, more selfish, goals motivating them to perform their research?
With respect to the second question, it seems likely that, in some cases, researchers simply want to satisfy their career ambitions. It is conceivable that these people would choose research topics and perform studies that would draw the most attention to themselves. Of course, the more extreme and controversial the research, the more attention the researchers will receive. Although this probably wasn’t Milgram’s motivation (Blass, 2004), his studies have become very well known, not only by other behavioral scientists but also by the general public: a play and a television movie based on his research were produced, and discussions of the obedience studies are a staple of introductory-psychology courses around the world.
With respect to the use of deception in psychological research, perhaps the most important ethical problem is that people who are deceived may not be given all the information needed to decide whether or not to participate. In fact, some argue that deception conflicts with an ethical requirement, first established during the 1970s, that research participants must give their informed consent to participate in a study. Informed consent involves telling potential participants about all aspects of a research study relevant to making a decision about their participation. This means that, before agreeing to participate, potential participants must be told everything that might be harmful to them (physically and psychologically). One psychologist, Baumrind (1964), argued that, rather than giving what we today call informed consent, the participants in Milgram’s research had been “entrapped” into performing actions that caused them psychological harm (also see Elms, 1995). Milgram’s participants, when they first agreed to participate, did not know that the Experimenter would command them to seriously harm, and perhaps even kill, another person. If the Experimenter had mentioned this possibility, it seems likely that most would have refused to take part in the study. The ethical problem with deception should now be obvious: are people really able to give their informed consent when they are lied to about fundamental aspects of a study?
Milgram (1964, 1974) defended himself against such criticisms by pointing to the fact that almost all subjects had a favorable view of the study after they had participated in it. Milgram argued that this was the most important factor justifying the continuation of any study:
The central moral justification for allowing a procedure of the sort used in my experiment is that it is judged acceptable by those who have taken part in it…. [W]hether it is unethical to pursue truths [by creating an illusion, thereby deceiving people] … cannot be answered in the abstract. It depends entirely on the response of those who have been exposed to such a procedure. (Milgram, 1974, p. 199)
Milgram (1974) stated that, if many of the participants had expressed their outrage at having been subjected to the experimental manipulations, he would have stopped the research. He claimed that, rather than being outraged, many participants reported that the experience had been personally meaningful: “They viewed the experience as an opportunity to learn something of importance about themselves, and more generally, about the conditions of human action” (p. 196). His point was that deception and the temporary distress that it may cause are not the moral problems some critics have claimed them to be — not when the participants themselves view the experiences as having been positive because they were (or eventually became) personally meaningful.
Furthermore, Milgram took care to reassure participants as soon as the experiment ended. According to Elms (1995):
The experimenter gave each subject a standard debriefing at the end of the hour, to minimize any continuing stress and to show that the ‘victim’ had not been injured by the ‘shocks’. When a subject appeared especially stressed, Milgram often moved out from behind the curtains to do an especially thorough job of reassurance and stress reduction.
A possible problem with Milgram’s line of reasoning involves an extension of a point made in Section 6-6: when people commit themselves to a course of action, they become motivated to think of their decision as having been the correct one (see Cialdini, 1998, for a summary of research on social commitments). Extending this idea to the present case, once people have agreed to participate in a study, they become motivated to think of their participation as having been correct and justified, even if they experienced harmful consequences. In fact, there is research demonstrating that the more distress people feel after agreeing to a course of action, the greater their motivation to think of their initial decision as having been correct (Aronson & Mills, 1959; Cialdini, 1998). It is thought that this helps to maintain self-esteem after making what might seem to have been a poor decision.
Is the motivation to maintain self-esteem the reason that almost all of Milgram’s participants evaluated the study as having been a positive experience? That is, was it just “wishful thinking” on their part, or did they actually experience something deeply meaningful? It is impossible to say. It is even possible that the motivation to maintain self-esteem (or some similar motivation) caused them to focus on the positive aspects of the study and to ignore or forget its negative aspects, which would make their experiences positive and meaningful even if this was not their original view. Thus, the outcome may depend on how others describe the experience to participants, especially authorities who help them to interpret their experiences afterwards. If this is true, then Milgram’s “debriefing” of participants immediately after their experimental sessions had ended (now an essential component of any study of humans, but novel in 1961 when Milgram started doing it) helped participants to interpret their participation positively, thereby reducing potential psychological harms.
In addition to the meaningfulness of the experience for participants, there are other benefits of research. The most important benefit is the knowledge gained from analyses and interpretations of a study’s results. In fact, because of the importance of Milgram’s findings to our knowledge of human nature, here you are, decades after his research was completed, reading about it.
In general, when evaluating the ethics of a proposed study, we must weigh its potential benefits (positive effects) against its potential costs (negative effects). If the benefits outweigh the costs to a sufficient degree, the study is judged to be ethical. But there are at least two issues that create difficulties for making this judgment:
- To what degree must the benefits outweigh the costs before we judge the study to be ethical?
- Which standards should be used to assess the amount of benefit or harm produced by a research study?
There seems to be no unambiguous answer to the first question. And, with respect to the second question, it seems obvious that each person assessing benefits and harms will rely on his or her own standards (values). Some people may feel that it is always wrong to cause another to experience distress or that scientific knowledge is not that important. Others may feel that scientific knowledge is of paramount importance or that emotional suffering is good for one’s character.
Regardless of your own judgment about the ethics of Milgram’s research, it no longer is possible to replicate his studies in the United States. Most colleges and universities in the United States have “institutional review boards” (IRBs) that consider the ethics of proposed studies. It is unlikely that an IRB would approve any study similar to Milgram’s. Some applaud this fact, whereas others lament its stifling effects upon the science of psychology (see Gunsalus et al., 2005). In fact, some researchers believe that it is impossible any longer to do any truly important research in social psychology (at least in the United States, where ethical standards seem to be the strictest).
Postscript. Burger (2009) performed a partial replication of Milgram’s study. This is an excellent article to read if you want to see how ethical considerations currently affect research in social psychology.
Study Questions for Section 6-8
- What are we assessing when we are evaluating the ethics of research?
- In what ways were the participants in Milgram’s study being deceived?
- Why is deception used in so many social psychological studies?
- What is (are) the ethical problem(s) with respect to the use of deception? (In your answer, please discuss the issue of informed consent.)
- Why did Milgram not think that his use of deception and the extreme distress experienced by many of his research participants violated ethical standards?
- With respect to the answer to the previous question, what problem(s) can be identified with the reasoning Milgram provided when he argued that his obedience study was ethical?
- Why is it important to have debriefing sessions for participants after they have completed a research study?
- What are possible benefits of research studies for the participants and the researchers?
- How do we decide whether or not a study meets ethical standards for research?
- Why do people disagree about whether a study is ethical or not?
- What is an institutional review board (IRB)?
- Why do many research psychologists criticize IRBs and the stringent ethical standards that now are the norm in the United States?
References

Alcock, J. (2006, June). Psychology and ethics. Skeptical Briefs, 16(2), 10-12.
American Psychological Association. (2010, June 1). Ethical Principles of Psychologists and Code of Conduct. Retrieved November 24, 2011, from http://www.apa.org/ethics/code.html
Aronson, E., & Mills, J. (1959). The effect of severity of initiation on liking for a group. Journal of Abnormal and Social Psychology, 59, 177-181. Retrieved November 24, 2011, from http://faculty.uncfsu.edu/tvancantfort/Syllabi/Gresearch/Readings/A_Aronson.pdf
Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral Study of Obedience.” American Psychologist, 19, 421-423. doi: 10.1037/h0040128. Retrieved November 24, 2011, from http://faculty.kent.edu/updegraffj/gradsocial/readings/baumrind.pdf
Blass, T. (2004). The man who shocked the world: The life and legacy of Stanley Milgram. New York: Basic Books.
Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64, 1-11. doi: 10.1037/a0010932. Retrieved November 24, 2011, from http://www.scu.edu/cas/psychology/faculty/upload/Replicating-Milgrampdf.pdf
Cialdini, R. B. (1998). Influence: The psychology of persuasion (Revised ed.). [CITY]: Collins.
Elms, A. C. (1982). Keeping deception honest: Justifying conditions for social scientific research stratagems. In T. L. Beauchamp, R. R. Faden, R. J. Wallace, Jr., & L. Walters (Eds.), Ethical Issues in Social Science Research (pp. 232-245). Baltimore: The Johns Hopkins University Press. Retrieved November 24, 2011, from http://www.ulmus.net/ace/library/keepingdeceptionhonest.html
Elms, A. C. (1995). Obedience in retrospect. Journal of Social Issues, 51, 121-131. Retrieved November 24, 2011, from http://www.ulmus.net/ace/library/obedience.html
Elms, A. C. (1998). Experimental ethics. Excerpted from Chapter 4 of A. C. Elms (1972), Social psychology and social relevance (pp. 146-160). Retrieved November 24, 2011, from http://www.ulmus.net/ace/library/obedienceresearchethics.html
Gunsalus, C. K., Bruner, E. M., Burbules, N. C., et al. (2005). The Illinois white paper – improving the system for protecting human subjects: Counteracting IRB “mission creep.” In C. K. Gunsalus (Ed.), Human subject protection regulations and research outside the biomedical sphere. Urbana, IL: University of Illinois Center for Advanced Study. U Illinois Law & Economics Research Paper No. LE06-016. Retrieved November 24, 2011, from http://tinyurl.com/y8nkchb
Milgram, S. (1964). Issues in the study of obedience: A reply to Baumrind. American Psychologist, 19, 848-852. doi: 10.1037/h0044954
Milgram, S. (1974). Obedience to authority: An experimental view. New York: Harper & Row.