© 2004, S. D. Cochran. All rights reserved.
RESEARCH DESIGN
The basic technique in research design is comparison. Through comparison we can answer questions that imply a contrast.
Example: Are depressed people more likely to commit suicide than nondepressed people?
The ways in which we make these comparisons differ:
- We can compare individuals to themselves over time
- We can compare one group to another group
- We can contrast our sample statistic with our knowledge of the population parameter
- We can contrast our sample with our knowledge of other samples that were studied at some other time or other populations
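The third kind of comparison--a sample statistic against a known population parameter--can be sketched with a one-sample z-test for a proportion. The numbers below are made up for illustration: suppose 12% of a sample of 400 people report a symptom, while the known population rate is 9%.

```python
import math

p_pop = 0.09      # assumed (hypothetical) population parameter
n = 400           # sample size
p_hat = 0.12      # sample statistic

# Standard error of the proportion under the population value,
# then the z statistic: how far the sample statistic sits from
# the population parameter, measured in standard errors.
se = math.sqrt(p_pop * (1 - p_pop) / n)
z = (p_hat - p_pop) / se

print(f"z = {z:.2f}")   # values beyond about +/-1.96 are unusual at the 5% level
```

Here z comes out near 2.1, so a sample rate of 12% would be surprising if the true rate were really 9%.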
Research designs also vary in whether we attempt to manipulate what we are observing (these are called experiments) or we simply observe the phenomena (called observational studies)
Definition: An experiment is when a researcher imposes a treatment on elements in the sample (such as persons, animals, neighborhoods, etc.) and then observes or measures a response
Experiments can examine cause-effect relationships
In an ideal world, the response observed is completely due to the treatment--but in the real world the response observed is due to many things, only part of which is the treatment.
Example: In an experiment, a researcher offers to pay you $5 (the treatment) if you can wiggle your ears. You do so (the response). What contributed to the researcher recording your ears as wiggling? A portion of the cause can be allocated to being paid (treatment effect), but some is also due to ability, how you feel that day, whether or not you understood what was being asked of you, and whether or not the researcher saw your ears move when you did it (exogenous factors).
Because responses are these combinations of treatment effects and exogenous factors, the only means we have of finding treatment effects is to somehow remove the influence of exogenous factors from our measurements--efforts to remove the effects of exogenous factors are referred to as control.
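The decomposition "response = treatment effect + exogenous factors" can be simulated. In this sketch (all numbers invented), the exogenous factors are lumped into random noise, and averaging over many subjects in each condition lets the noise cancel so the difference in group means recovers the treatment effect.

```python
import random

random.seed(0)

TREATMENT_EFFECT = 5.0   # the signal we hope to recover (hypothetical units)

def response(treated):
    # Each measured response mixes the treatment effect (if any)
    # with exogenous factors, modeled here as random noise.
    exogenous = random.gauss(0, 10)
    return (TREATMENT_EFFECT if treated else 0.0) + exogenous

treated = [response(True) for _ in range(10000)]
control = [response(False) for _ in range(10000)]

# The difference in group means estimates the treatment effect,
# because the exogenous noise averages out in each group.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(round(estimate, 1))
```

Any single subject's response is dominated by noise; only the comparison between groups isolates the treatment effect.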
There are several methods used to achieve control:
- One method is to apply the treatment to only some of the elements in the sample and then contrast their responses with those who don't get the treatment--the latter are referred to as control subjects and they are part of a control group
- Deciding who is in the treatment group and who is in the control group is called assignment, and each group is called a condition
- If all elements in the sample have an equal chance of being assigned to a condition (e.g., treatment group, control group), then it is random assignment--an experiment with random assignment is said to be randomized controlled
- If elements have different chances of being assigned to one group or another then control over the exogenous factors is lost (if we don't have a sense of what these different 'chances' are)
Example: We want to run an experiment to see if winning in sports improves absorption of statistics information. Divide the class into two teams to play football. How should we choose sides so that it is fair? Flip a coin? Athletes on one side; everyone else on the other? Why is a coin flip fair? Putting all the athletes on one side is an example of confounding.
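Random assignment can be implemented by shuffling the roster and splitting it in half--every student then has the same chance of landing on either team, the many-subject equivalent of a fair coin flip for each pair. The names below are placeholders.

```python
import random

random.seed(42)

# Hypothetical class roster.
students = ["Ana", "Ben", "Cai", "Dev", "Eve", "Fay", "Gus", "Hal"]

# Random assignment: shuffle a copy of the roster, then split it in half.
roster = students[:]
random.shuffle(roster)
half = len(roster) // 2
team_a, team_b = roster[:half], roster[half:]

print(team_a)
print(team_b)
```

Because the shuffle ignores athletic ability (and everything else), exogenous factors are spread evenly across teams on average.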
- A second method is to adjust for exogenous differences through statistical control, that is, mathematically removing the estimated effects of the exogenous factors from the measured response--the exogenous factors are then said to be statistically controlled for.
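A minimal sketch of statistical control, with fabricated data: suppose an exogenous factor x (say, hours of sleep) also affects the outcome y, and the treated group happens to sleep more. We estimate x's effect from the control group and subtract it before comparing groups.

```python
def slope(xs, ys):
    # Ordinary least-squares slope of y on x.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Made-up data: y = 2*x + 5 for treated subjects, y = 2*x otherwise.
ctrl_x = [4, 5, 6, 7]
ctrl_y = [2 * x for x in ctrl_x]          # exogenous effect only
trt_x = [6, 7, 8, 9]                      # treated group sleeps more
trt_y = [2 * x + 5 for x in trt_x]        # exogenous effect + treatment

b = slope(ctrl_x, ctrl_y)                 # estimated effect of x on y

# Remove the estimated exogenous effect, then compare group means.
adj_ctrl = [y - b * x for x, y in zip(ctrl_x, ctrl_y)]
adj_trt = [y - b * x for x, y in zip(trt_x, trt_y)]

raw_diff = sum(trt_y) / 4 - sum(ctrl_y) / 4       # inflated by the x difference
adj_diff = sum(adj_trt) / 4 - sum(adj_ctrl) / 4   # recovers the treatment effect
print(raw_diff, adj_diff)                          # prints 9.0 5.0
```

The raw comparison overstates the treatment effect (9 instead of 5) because the groups differ on x; the adjusted comparison removes that exogenous difference.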
- Experimental designs also use other methods to improve control over the score that is obtained when the response is measured
- Researchers try to keep subjects and those who administer the treatments in the dark about what their response should be
- People's expectations have very subtle but profound effects on the world. By keeping subjects blind to which treatment condition they are in, researchers attempt to remove these expectancy effects
- Experimenters also are influenced by expectancy effects--some designs keep them unaware of treatment conditions--when neither subjects nor experimenters know the assigned condition, the design is called double-blind
- Researchers also replicate their experiments (repeat the study in the same way or with only minor variations) to see if their results remain the same
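One way double-blinding is arranged in practice can be sketched as follows (subject IDs and codes are hypothetical): a third party assigns conditions, keeps the key, and hands everyone else only opaque codes, so neither subjects nor experimenters can tell who is in which condition until the study is unblinded.

```python
import random

random.seed(7)

subjects = ["S1", "S2", "S3", "S4", "S5", "S6"]
conditions = ["treatment", "control"] * 3
random.shuffle(conditions)

# The key stays with a third party until the study is unblinded.
key = dict(zip(subjects, conditions))

# Subjects and experimenters see only opaque codes; responses are
# recorded against these codes, never against conditions.
blinded_labels = {s: f"code-{i:03d}" for i, s in enumerate(subjects)}

print(blinded_labels["S1"])   # reveals nothing about S1's condition
```

Only after all responses are recorded is the key used to match codes back to conditions for analysis.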
Definition: In an observational study the researcher does not assign subjects to conditions--the conditions already exist--and the researcher makes comparisons among the elements. Surveys, such as a Gallup poll of voting intentions, are an example of observational studies.
Observational studies can determine association--that is, they can find that two things may be related to each other--but they cannot generally determine causation
Example: We collect a sample of cars from the student parking lots up at the dorm and measure whether or not the horn sounds when we put the key in the ignition and turn it part way. We divide cars into two groups--in one group the horn sounds; in the other the horn doesn't sound. We then measure if the radio plays. We find that nearly every car in the horn sounds group has a radio that plays but nearly every car where the horn doesn't sound, the radio doesn't play either. We conclude? Dead horns cause dead radios? No, there is a third, unmeasured variable that is causal (dead battery)
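The parking-lot example can be simulated: a hidden third variable (the battery) drives both observed variables, producing a strong horn-radio association with no horn-to-radio causation. All probabilities below are invented for illustration.

```python
import random

random.seed(1)

cars = []
for _ in range(1000):
    battery_ok = random.random() < 0.7          # unmeasured common cause
    # Horn and radio each depend (almost entirely) on the battery,
    # not on each other.
    horn = battery_ok and random.random() < 0.95
    radio = battery_ok and random.random() < 0.95
    cars.append((horn, radio))

horn_works = [c for c in cars if c[0]]
horn_dead = [c for c in cars if not c[0]]

def share_radio(group):
    return sum(1 for _, r in group if r) / len(group)

print(f"radio plays | horn works: {share_radio(horn_works):.2f}")
print(f"radio plays | horn dead:  {share_radio(horn_dead):.2f}")
```

The radio plays in nearly all horn-works cars and in few horn-dead cars--a large association, even though the code contains no causal path from horn to radio.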
Observational studies control for the effects of exogenous factors through comparing subsamples of elements that are relatively homogeneous
Example: Going back to our study of seeing if winning in sports improves absorption of statistics information. If the class had already divided itself into two teams to play football, we might compare athletes on one team versus athletes on another, or women on one team versus women on the other, or men on one team versus men on the other. How else could we try to control for differences?
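Comparing within homogeneous subgroups can be sketched like this, using fabricated records of (team, gender, quiz score): instead of comparing whole teams, we compare like with like--women on team A versus women on team B, men versus men.

```python
# Fabricated records: (team, gender, quiz score).
records = [
    ("A", "woman", 82), ("A", "woman", 78), ("A", "man", 70), ("A", "man", 74),
    ("B", "woman", 75), ("B", "woman", 71), ("B", "man", 64), ("B", "man", 66),
]

def mean_score(team, gender):
    # Average score within one homogeneous subgroup.
    scores = [s for t, g, s in records if t == team and g == gender]
    return sum(scores) / len(scores)

# Compare like with like across teams.
for gender in ("woman", "man"):
    diff = mean_score("A", gender) - mean_score("B", gender)
    print(f"{gender}: team A - team B = {diff:.1f}")
```

Because each comparison holds gender constant, any gender-related exogenous differences between the teams cannot account for the within-stratum gaps.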
The choice between an experiment and an observational study sometimes depends on the question we want to answer
- Experimental studies can determine cause-effect, and give the researcher maximum control over scores on the outcome variable of interest
Example: Does shock improve performance of rats running a maze?
- Observational studies
- Can be used to describe and to find associations
Example: What is the prevalence of marijuana use among high school students?
- Can be used to examine evidence for cause-effect associations where experiments are impossible
Example: Does an earthquake in Southern California increase the likelihood of an earthquake in Northern California?
- Each has its limitations
- Experiments can be unethical or impractical.
Example: Does child abuse cause depression in adulthood? (Can we assign children to the abuse condition to observe what happens to them in adulthood?)
- Observational studies can be misleading when we attempt to make causal inference
Example: Recent studies from worldwide research suggest that being tall is a risk indicator for breast cancer. What could the biological link be? Is it because women in developed countries have different diets and are taller than women in developing countries?