Respond in a minimum of 175 words:
Part 1: Share a question you have that you would like to research. What is the population of interest? What type of sample would you collect for this study? Explain what guidelines you used to select that sample.
Part 2: Part two is attached below.
References: Chapter 1
• Explain the goals of science.
• Identify and compare descriptive methods.
• Identify and compare predictive (relational) methods.
• Describe the explanatory method. Your description should include independent variable, dependent variable, control group, and experimental group.
• Explain how we “do” science and how proof and disproof relate to doing science.
You may be wondering why you are enrolled in a statistics class. Most students take statistics because it is a requirement in their major field, and often students do not understand why it is a requirement. Scientists and researchers use statistics to describe data and draw inferences. Thus, no matter whether your major is in the behavioral sciences, the natural sciences, or in more applied areas such as business or education, statistics are necessary to your discipline. Why? Statistics are necessary because scientists and researchers collect data and test hypotheses with these data using statistics. A hypothesis is a prediction regarding the outcome of a study. This prediction concerns the potential relationship between at least two variables (a variable is an event or behavior that has at least two values). Hypotheses are stated in such a way that they are testable. When we test our hypothesis, statistics may lead us to conclude that our hypothesis is or is not supported by our observations.
hypothesis A prediction regarding the outcome of a study involving the potential relationship between at least two variables.
variable An event or behavior that has at least two values.
In science, the goal of testing hypotheses is to arrive at or test a theory—an organized system of assumptions and principles that attempts to explain certain phenomena and how they are related. Theories help us to organize and explain the data gathered in research studies. In other words, theories allow us to develop a framework regarding the facts in a certain area. For example, Darwin’s theory organizes and explains facts related to evolution. In addition to helping us organize and explain facts, theories also help in producing new knowledge by steering researchers toward specific observations of the world.
theory An organized system of assumptions and principles that attempts to explain certain phenomena and how they are related.
Students are sometimes confused about the differences between a hypothesis and a theory. A hypothesis is a prediction regarding the outcome of a single study. Many hypotheses may be tested and several research studies conducted before a comprehensive theory on a topic is put forth. Once a theory is developed, it may aid in generating future hypotheses. In other words, researchers may have additional questions regarding the theory that help them to generate new hypotheses to test. If the results from these additional studies further support the theory, we are likely to have greater confidence in the theory. However, every time we test a hypothesis, statistics are necessary.
Goals of Science
Scientific research has three basic goals: (1) to describe, (2) to predict, and (3) to explain. All of these goals lead to a better understanding of behavior and mental processes.
Description Description begins with careful observation. Behavioral scientists might describe patterns of behavior, thought, or emotions in humans. They might also describe the behavior(s) of other animals. For example, researchers might observe and describe the type of play behavior exhibited by children or the mating behavior of chimpanzees. Description allows us to learn about behavior and when it occurs. Let’s say, for example, that you were interested in the channel-surfing behavior of males and females. Careful observation and description would be needed in order to determine whether or not there were any gender differences in channel-surfing. Description allows us to observe that two events are systematically related to one another. Without description as a first step, predictions cannot be made.
description Carefully observing behavior in order to describe it.
Prediction Prediction allows us to identify the factors that indicate when an event or events will occur. In other words, knowing the level of one variable allows us to predict the approximate level of the other variable. We know that if one variable is present at a certain level, then there is a greater likelihood that the other variable will be present at a certain level. For example, if we observed that males channel-surf with greater frequency than females, we could then make predictions about how often males and females might change channels when given the chance.
prediction Identifying the factors that indicate when an event or events will occur.
Explanation Finally, explanation allows us to identify the causes that determine when and why a behavior occurs. In order to explain a behavior, we need to demonstrate that we can manipulate the factors needed to produce or eliminate the behavior. For example, in our channel-surfing example, if gender predicts channel-surfing, what might cause it? It could be genetic or environmental. Maybe males have less tolerance for commercials and thus channel-surf at a greater rate. Maybe females are more interested in the content of commercials and are thus less likely to change channels. Maybe the attention span of females is greater. Maybe something associated with having a Y chromosome increases channel-surfing, or something associated with having two X chromosomes leads to less channel-surfing. Obviously the possible explanations are numerous and varied. As scientists, we test these possibilities to identify the best explanation of why a behavior occurs. When we try to identify the best explanation for a behavior, we must systematically eliminate any alternative explanations. To eliminate alternative explanations, we must impose control over the research situation. We will discuss the concepts of control and alternative explanations shortly.
explanation Identifying the causes that determine when and why a behavior occurs.
An Introduction to Research Methods in Science
The goals of science map very closely onto the research methods that scientists use. In other words, there are methods that are descriptive in nature, predictive in nature, and explanatory in nature. I will briefly introduce these methods here.
Descriptive Methods
Behavioral scientists use three types of descriptive methods. First is the observational method—simply making observations of human or other animal behavior. Scientists approach observation in two ways. Naturalistic observation involves observing humans or other animals behave in their natural habitat. Observing the mating behavior of chimpanzees in their natural setting would be an example of this approach. Laboratory observation involves observing behavior in a more contrived and controlled situation, usually the laboratory. Bringing children to a laboratory playroom to observe play behavior would be an example of this approach. Observation involves description at its most basic level. One advantage of the observational method, as well as other descriptive methods, is the flexibility to change what one is studying. A disadvantage of descriptive methods is that the researcher has little control. As we use more powerful methods, we gain control but lose flexibility.
observational method Making observations of human or other animal behavior.
A second descriptive method is the case study method. A case study is an in-depth study of one or more individuals. Freud used case studies to develop his theory of personality development. Similarly, Jean Piaget used case studies to develop his theory of cognitive development in children. This method is descriptive in nature, as it involves simply describing the individual(s) being studied.
case study method An in-depth study of one or more individuals.
The third method that relies on description is the survey method—questioning individuals on a topic or topics and describing their responses. Surveys can be administered by mail, over the phone, on the Internet, or as a personal interview. One advantage of the survey method over the other descriptive methods is that it allows researchers to study larger groups of individuals more easily. This method has disadvantages, however. One concern has to do with the wording of questions. Are they easy to understand? Are they written in such a manner that they bias the respondents’ answers? Such concerns relate to the validity of the data collected. Another concern relevant to the survey method (and most other research methods) is whether the group of people who participate in the study (the sample) is representative of all the people about whom the study is meant to generalize (the population). This concern can usually be overcome through random sampling. A random sample is achieved when, through random selection, each member of the population is equally likely to be chosen as part of the sample.
survey method Questioning individuals on a topic or topics and then describing their responses.
sample The group of people who participate in a study.
population All of the people about whom a study is meant to generalize.
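The distinction between a population and a random sample can be made concrete with a short simulation. The sketch below, in Python, uses a hypothetical population of 10,000 student ID numbers and a hypothetical sample size of 100; `random.sample` draws without replacement, so each member of the population is equally likely to be chosen:

```python
import random

random.seed(42)  # for reproducibility

# Hypothetical population: 10,000 student ID numbers.
population = list(range(10000))

# random.sample draws without replacement, giving every member of the
# population an equal chance of being chosen -- a random sample.
sample = random.sample(population, k=100)

print(len(sample))       # 100
print(len(set(sample)))  # 100, so no member was chosen twice
```

Because every member has an equal chance of selection, a sample drawn this way tends to be representative of the population, which is what lets us generalize from the sample to the population.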
Predictive (Relational) Methods
Two methods allow researchers to not only describe behaviors but also predict from one variable to another. The first, the correlational method, assesses the degree of relationship between two measured variables. If two variables are correlated with each other, we can predict from one variable to the other with a certain degree of accuracy. For example, height and weight are correlated. The relationship is such that an increase in one variable (height) is generally accompanied by an increase in the other variable (weight). Knowing this, we can predict an individual’s approximate weight, with a certain degree of accuracy, given the person’s height.
correlational method A method that assesses the degree of relationship between two variables.
One problem with correlational research is that it is often misinterpreted. Frequently, people assume that because two variables are correlated, there must be some sort of causal relationship between the variables. This is not so. Correlation does not imply causation. Remember that a correlation simply means that the two variables are related in some way. For example, being a certain height does not cause you to also be a certain weight. It would be nice if it did, because then we would not have to worry about being either under- or overweight. What if I told you that watching violent TV and displaying aggressive behavior were correlated? What could you conclude based on this correlation? Many people might conclude that watching violent TV causes one to act more aggressively. Based on the evidence given (a correlational study), however, we cannot draw this conclusion. All we can conclude is that those who watch more violent television programs also tend to act more aggressively. It is possible that the violent TV causes aggression, but we cannot draw this conclusion based only on correlational data. It is also possible that those who are aggressive by nature are attracted to more violent television programs, or that some other variable is causing both aggressive behavior and violent TV watching. The point is that observing a correlation between two variables simply means that they are related to each other.
The correlation between height and weight, or violent TV and aggressive behavior, is a positive relationship: As one variable (height) increases, we observe an increase in the second variable (weight). Some correlations indicate a negative relationship: As one variable increases, the other variable systematically decreases. Can you think of an example of a negative relationship between two variables? Consider this: As mountain elevation increases, temperature decreases. Negative correlations also allow us to predict from one variable to another. If I know the mountain elevation, it will help me predict the approximate temperature.
positive relationship A relationship between two variables in which an increase in one variable is accompanied by an increase in the other variable.
negative relationship A relationship between two variables in which an increase in one variable is accompanied by a decrease in the other variable.
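These two kinds of relationship can be illustrated with a short computation of Pearson's correlation coefficient, the most common measure of linear relationship. The sketch below is illustrative only; the height/weight and elevation/temperature numbers are invented for the example:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: height (inches) and weight (pounds).
height = [60, 62, 65, 68, 70, 72]
weight = [115, 125, 140, 155, 165, 180]

# Hypothetical data: elevation (thousands of feet) and temperature (F).
elevation = [1, 3, 5, 7, 9]
temperature = [70, 61, 50, 43, 35]

print(round(pearson_r(height, weight), 2))         # near +1: positive relationship
print(round(pearson_r(elevation, temperature), 2)) # near -1: negative relationship
```

A coefficient near +1 means that as one variable increases, so does the other; a coefficient near -1 means that as one increases, the other decreases. Either extreme supports prediction; neither, by itself, supports a causal conclusion.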
Besides the correlational method, a second method that allows us to describe and predict is the quasi-experimental method. Quasi-experimental research allows us to compare naturally occurring groups of individuals. For example, we could examine whether alcohol consumption by students in a fraternity or sorority differs from that of students not in such organizations. You will see in a moment that this method differs from the experimental method, described below, in that the groups studied occur naturally. In other words, we do not assign people to join a Greek organization or not. They have chosen their groups on their own, and we are simply looking for differences (in this case, in the amount of alcohol typically consumed) between these naturally occurring groups. This is often referred to as a subject or participant variable—a characteristic inherent in the participants that cannot be changed. Because we are using groups that occur naturally, any differences that we find may be due to the variable of being a Greek member or not, or the differences may be due to other factors that we were unable to control in this study. For example, maybe those who like to drink more are also more likely to join a Greek organization. Once again, if we find a difference between these groups in amount of alcohol consumed, we can use this finding to predict what type of student (Greek or non-Greek) is likely to drink more. However, we cannot conclude that belonging to a Greek organization causes one to drink more because the participants came to us after choosing to belong to these organizations. In other words, what is missing when we use predictive methods such as the correlational and quasi-experimental methods is control.
quasi-experimental method Research that compares naturally occurring groups of individuals; the variable of interest cannot be manipulated.
When using predictive methods, we do not systematically manipulate the variables of interest; we only measure them. This means that, although we may observe a relationship between variables (such as that described between drinking and Greek membership), we cannot conclude that it is a causal relationship. Why? Because there could be other, alternative explanations for this relationship. An alternative explanation is the idea that it is possible that some other, uncontrolled, extraneous variable may be responsible for the observed relationship. For example, maybe those who choose to join Greek organizations come from higher-income families and have more money to spend on such things as alcohol. Or maybe those who choose to join Greek organizations are more interested in socialization and drinking alcohol before they even join the organization. Thus, because these methods leave the possibility for alternative explanations, we cannot use them to establish cause-and-effect relationships.
alternative explanation The idea that it is possible that some other, uncontrolled, extraneous variable may be responsible for the observed relationship.
Explanatory Method
When using the experimental method, researchers pay a great deal of attention to eliminating alternative explanations by using the proper controls. Because of this, the experimental method allows researchers not only to describe and predict but also to determine whether there is a cause-and-effect relationship between the variables of interest. In other words, this method enables researchers to know when and why a behavior occurs. Many preconditions must be met in order for a study to be experimental in nature. Here, we will simply consider the basics—the minimum requirements needed for an experiment.
experimental method A research method that allows a researcher to establish a cause-and-effect relationship through manipulation of a variable and control of the situation.
The basic premise of experimentation is that the researcher controls as much as possible in order to determine whether there is a cause-and-effect relationship between the variables being studied. Let’s say, for example, that a researcher is interested in whether cell phone use while driving affects driving performance. The idea behind experimentation is that the researcher manipulates at least one variable (known as the independent variable) and measures at least one variable (known as the dependent variable). In our study, what should the researcher manipulate? If you identified the use of cell phones while driving, then you are correct. If cell phone use while driving is the independent variable, then driving performance is the dependent variable. For comparative purposes, the independent variable has to have at least two groups or conditions. We typically refer to these two groups or conditions as the control group and the experimental group. The control group is the group that serves as the baseline or “standard” condition. In our study of cell phone use while driving, the control group is the group that does not use a cell phone while driving. The experimental group is the group that receives the treatment—in this case, those who use cell phones while driving. Thus, in an experiment, one thing that we control is the level of the independent variable that participants receive.
independent variable The variable in a study that is manipulated by the researcher.
dependent variable The variable in a study that is measured by the researcher.
control group The group of participants that does not receive any level of the independent variable and serves as the baseline in a study.
experimental group The group of participants that receives some level of the independent variable.
What else should we control to help eliminate alternative explanations? Well, we need to control the type of subjects in each of the treatment conditions. We should begin by drawing a random sample of subjects from the population. Once we have our sample of subjects, we have to decide who will serve in the control group versus the experimental group. In order to gain as much control as possible, and eliminate as many alternative explanations as possible, we should use random assignment—assigning participants to conditions in such a way that every subject has an equal probability of being placed in any condition. How does random assignment help us to gain control and eliminate alternative explanations? By using random assignment we should minimize or eliminate differences between the groups. In other words, we want the two groups of participants to be as alike as possible. The only difference we want between the groups is that of the independent variable we are manipulating—either using or not using cell phones while driving. Once participants are assigned to conditions, we measure driving performance for subjects in each condition using a driving simulator (the dependent variable). Studies such as this one have already been completed by researchers. What researchers have found is that cell phone use while driving has a negative effect on driving performance (Beede & Kass, 2006; Dula, Martin, Fox, & Leonard, 2011).
random assignment Assigning participants to conditions in such a way that every participant has an equal probability of being placed in any condition.
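Random assignment is easy to sketch in code. Assuming a hypothetical list of 20 participant IDs, the Python below shuffles the list and splits it in half, so every participant has an equal probability of landing in either condition:

```python
import random

random.seed(7)  # for reproducibility

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs

# Shuffle, then split: every participant is equally likely to end up
# in either condition, which minimizes preexisting group differences.
shuffled = participants[:]
random.shuffle(shuffled)
control_group = shuffled[:10]       # will not use a cell phone
experimental_group = shuffled[10:]  # will use a cell phone while driving

print(len(control_group), len(experimental_group))  # 10 10
```

With groups formed this way, subject variables such as age or driving experience should be roughly balanced across conditions, leaving the independent variable as the only systematic difference between the groups.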
Let’s review some of the controls we have used in the present study. We have controlled who is in the study (we want a sample representative of the population about whom we are trying to generalize), who participates in each group (we should randomly assign participants to the two conditions), and the treatment each group receives as part of the study (some drive while using a cell phone and some do not). Can you identify other variables that we might need to consider controlling in the present study? How about past driving record, how long subjects have driven, age, and their proficiency with cell phones? There are undoubtedly other variables we would need to control if we were to complete this study. The basic idea is that when using the experimental method, we try to control as much as possible by manipulating the independent variable and controlling any other extraneous variables that could affect the results of the study. Randomly assigning participants also helps to control for subject differences between the groups. What does all of this control gain us? If, after completing this study with the proper controls, we find that those in the experimental group (those who drove while using a cell phone) did in fact have lower driving performance scores than those in the control group, we would have evidence supporting a cause-and-effect relationship between these variables. In other words, we could conclude that driving while using a cell phone negatively impacts driving performance.
control Manipulating the independent variable in an experiment while holding constant any other extraneous variables that could affect the results of a study.
AN INTRODUCTION TO RESEARCH METHODS
Goal Met      Research Method             Advantages/Disadvantages
Description   Observational method        Descriptive methods allow description of behavior(s)
              Case study method           Descriptive methods do not support reliable predictions
              Survey method               Descriptive methods do not support cause-and-effect explanations
Prediction    Correlational method        Predictive methods allow description of behavior(s)
              Quasi-experimental method   Predictive methods support reliable predictions from one variable to another
                                          Predictive methods do not support cause-and-effect explanations
Explanation   Experimental method         Allows description of behavior(s)
                                          Supports reliable predictions from one variable to another
                                          Supports cause-and-effect explanations
a. What is the independent variable?
b. What is the dependent variable?
c. Is the independent variable a participant variable or a true manipulated variable?
a. What percentage of cars run red lights?
b. Do student athletes spend as much time studying as student nonathletes?
c. Is there a relationship between type of punishment used by parents and aggressiveness in children?
d. Do athletes who are randomly assigned to a group using imagery techniques perform better than those who are randomly assigned to a group not using such techniques?
Doing Science
Although the experimental method can establish a cause-and-effect relationship, most researchers would not wholeheartedly accept a conclusion from only one study. Why is that? Any one of a number of problems can occur in a study. For example, there may be control problems. Researchers may believe they have controlled for everything but miss something, and the uncontrolled factor may affect the results. In other words, a researcher may believe that the manipulated independent variable caused the results when, in reality, it was something else.
Another reason for caution in interpreting experimental results is that a study may be limited by the technical equipment available at the time. For example, in the early part of the 19th century, many scientists believed that studying the bumps on a person’s head allowed them to know something about the internal mind of the individual being studied. This movement, known as phrenology, was popularized through the writings of physician Franz Joseph Gall (1758–1828). At the time that it was popular, phrenology appeared very “scientific” and “technical.” With hindsight and with the technological advances that we have today, the idea of phrenology seems laughable.
Finally, we cannot completely rely on the findings of one study because a single study cannot tell us everything about a theory. The idea of science is that it is not static; the theories generated through science change. For example, we often hear about new findings in the medical field, such as “Eggs are so high in cholesterol that you should eat no more than two a week.” Then, a couple of years later, we might read, “Eggs are not as bad for you as originally thought. New research shows that it is acceptable to eat them every day,” followed a few years later by even more recent research indicating that “two eggs a day are as bad for you as smoking cigarettes every day” (Spence, Jenkins, & Davignon, 2012). You may have heard people confronted with such contradictory findings complain, “Those doctors, they don’t know what they’re talking about. You can’t believe any of them. First they say one thing, and then they say completely the opposite. It’s best to just ignore all of them.” The point is that when testing a theory scientifically, we may obtain contradictory results. These contradictions may lead to new, very valuable information that subsequently leads to a theoretical change. Theories evolve and change over time based on the consensus of the research. Just because a particular idea or theory is supported by data from one study does not mean that the research on that topic ends and that we just accept the theory as it currently stands and never do any more research on that topic.
Proof and Disproof
When scientists test theories, they do not try to prove them true. Theories can be supported based on the data collected, but obtaining support for something does not mean it is true in all instances. Proof of a theory is logically impossible. As an example, consider the following problem, adapted from Griggs and Cox (1982). This is known as the Drinking Age Problem (the reason for the name will become readily apparent).
On this task imagine that you are a police officer responsible for making sure the drinking-age rule is being followed. The four cards below represent information about four people sitting at a table. One side of a card indicates what the person is drinking and the other side of the card indicates the person’s age. The rule is: “If a person is drinking alcohol, then the person is 21 or over.” In order to check that the rule is true or false, which card or cards below would you turn over? Turn over only the card or cards that you need to check to be sure.
Drinking a beer     Drinking a Coke     22 years of age     16 years of age
Does turning over the beer card and finding that the person is 21 years of age or older prove that the rule is always true? No—the fact that one person is following the rule does not mean that it is always true. How, then, do we test a hypothesis? We test a hypothesis by attempting to falsify or disconfirm it. If it cannot be falsified, then we say we have support for it. Which cards would you choose in an attempt to falsify the rule in the drinking age problem? If you identified the beer card as being able to falsify the rule, then you were correct. If we turn over the beer card and find that the individual is under 21 years of age, then the rule is false. Is there another card that could also falsify the rule? Yes, the 16 years of age card can. How? If we turn that card over and find that the individual is drinking alcohol, then the rule is false. These are the only two cards that can potentially falsify the rule. Thus, they are the only two cards that need to be turned over.
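The falsification logic of the Drinking Age Problem can be written out as a short program. In the sketch below the card representations are my own: each card shows either a drink or an age, and a card is worth turning over only if some hidden value on its other side could violate the rule "if a person is drinking alcohol, then the person is 21 or over":

```python
def violates_rule(drink, age):
    """True if a (drink, age) pair falsifies the rule."""
    return drink == "alcohol" and age < 21

def worth_turning_over(card):
    """A card is worth turning over if some hidden value could combine
    with its visible side to violate the rule."""
    kind, value = card
    if kind == "drink":
        # Hidden side is an age; try representative ages under and over 21.
        return any(violates_rule(value, age) for age in (16, 25))
    else:
        # Hidden side is a drink; try both possibilities.
        return any(violates_rule(drink, value) for drink in ("alcohol", "soda"))

cards = [("drink", "alcohol"),  # the beer card
         ("drink", "soda"),
         ("age", 25),
         ("age", 16)]

to_turn = [card for card in cards if worth_turning_over(card)]
print(to_turn)  # only the alcohol card and the age-16 card survive
```

The program reaches the same conclusion as the text: only the alcohol card and the under-21 card can falsify the rule, so only those two need to be turned over. Turning over the soda card or the over-21 card can confirm instances of the rule but can never disconfirm it.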
Even though disproof or disconfirmation is logically sound in terms of testing hypotheses, falsifying a hypothesis does not always mean that the hypothesis is false. Why? There may be design problems in the study, as described earlier. Thus, even when a theory is falsified, we need to be cautious in our interpretation. We do not want to completely discount a theory based on a single study.
REVIEW OF KEY TERMS
alternative explanation
case study method
control
control group
correlational method
dependent variable
description
experimental group
experimental method
explanation
hypothesis
independent variable
negative relationship
observational method
population
positive relationship
prediction
quasi-experimental method
random assignment
sample
survey method
theory
variable
MODULE EXERCISES
(Answers to odd-numbered questions appear in Appendix B.)
a. What is the independent variable in this study?
b. What is the dependent variable in this study?
c. Identify the control and experimental groups in this study.
d. Is the independent variable manipulated or a participant variable?
a. What is the independent variable in this study?
b. What is the dependent variable in this study?
c. Identify the control and experimental groups in this study.
d. Is the independent variable manipulated or a participant variable?
a. What is the independent variable in this study?
b. What is the dependent variable in this study?
c. Identify the control and experimental groups in this study.
d. Is the independent variable manipulated or a participant variable?
CRITICAL THINKING CHECK ANSWERS
Critical Thinking Check 1.1
a. This is a negative relationship: the variables are related in an inverse manner. That is, those with psychological disorders also tend to have lower income levels.
b. The dependent variable is life satisfaction.
c. The independent variable is a participant variable.
b. Quasi-experimental method
c. Correlational method
d. Experimental method
MODULE 2
Variables and Measurement
Learning Objectives
• Explain and give examples of an operational definition.
• Explain the four properties of measurement and how they are related to the four scales of measurement.
• Explain the difference between a discrete variable and a continuous variable.
An important step when designing a study is to define the variables in your study. A second important step is to determine the level of measurement of the dependent variable, which will ultimately help to determine which statistics are appropriate for analyzing the data collected.
Operationally Defining Variables
Some variables are fairly easy to define, manipulate, and measure. For example, if a researcher were studying the effects of exercise on blood pressure, she could manipulate the amount of exercise by varying the length of time that individuals exercised or by varying the intensity of the exercise (as by monitoring target heart rates). She could also measure blood pressure periodically during the course of the study; a machine already exists that will take this measure in a consistent and accurate manner. Does this mean that the measure will always be accurate? No. There is always the possibility for measurement error. In other words, the machine may not be functioning properly, or there may be human error contributing to the measurement error.
Now let’s suppose that a researcher wants to study a variable that is not as concrete or easily measured as blood pressure. For example, many people study abstract concepts such as aggression, attraction, depression, hunger, or anxiety. How would we either manipulate or measure any of these variables? My definition of what it means to be hungry may be quite different from yours. If I decided to measure hunger by simply asking participants in an experiment if they were hungry, the measure would not be accurate because each individual may define hunger in a different way. What we need is an operational definition of hunger—a definition of the variable in terms of the operations (activities) the researcher uses to measure or manipulate it.
operational definition A definition of a variable in terms of the operations (activities) a researcher uses to measure or manipulate it.
As this is a somewhat circular definition, let’s reword it in a way that may make more sense. An operational definition specifies the activities of the researcher in measuring and/or manipulating a variable (Kerlinger, 1986). In other words, we might define hunger in terms of specific activities, such as not having eaten for 12 hours. Thus, one operational definition of hunger could be that simple: Hunger occurs when 12 hours have passed with no food intake. Notice how much more concrete this definition is than simply saying hunger is that “gnawing feeling” that you get in your stomach. Specifying hunger in terms of the number of hours without food is an operational definition, whereas defining hunger as that “gnawing feeling” is not an operational definition.
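An operational definition is, in effect, a rule precise enough to compute. The sketch below encodes the "12 hours without food" definition of hunger as a Python function; the 12-hour threshold comes from the text, while the function name and example values are illustrative:

```python
HUNGER_THRESHOLD_HOURS = 12  # operational definition: 12 hours with no food intake

def is_hungry(hours_since_last_meal):
    """Operational definition of hunger: a participant counts as hungry
    when 12 or more hours have passed with no food intake."""
    return hours_since_last_meal >= HUNGER_THRESHOLD_HOURS

# The definition is concrete: any two researchers applying it to the
# same participant will classify that participant the same way.
print(is_hungry(4))   # False
print(is_hungry(13))  # True
```

Contrast this with the "gnawing feeling" definition, which cannot be written as a function at all: there is no activity a second researcher could perform to apply it consistently.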
In research, it is necessary to operationally define all variables—those measured (dependent variables) and those manipulated (independent variables). One reason for doing so is to ensure that the variables are measured consistently or manipulated in the same way during the course of the study. Another reason is to help us communicate our ideas to others. For example, what if a researcher said that she measured anxiety in her study? I would need to know how she defined anxiety operationally because it can be defined in many different ways. Thus, it can be measured in many different ways. For example, anxiety could be defined as the number of nervous actions displayed in a 1-hour time period, as a person’s score on a GSR (galvanic skin response) machine, as a person’s heart rate, or as a person’s score on the Taylor Manifest Anxiety Scale. Some measures are better than others—better meaning more consistent and valid. Once I understand how a researcher has defined a variable operationally, I can replicate the study if I desire. I can begin to have a better understanding of the study and whether or not it may have problems. I can also better design my study based on how the variables were operationally defined in other research studies.
Properties of Measurement
In addition to operationally defining independent and dependent variables, you must consider the level of measurement of the dependent variable. There are four levels of measurement, each based on the characteristics or properties of the data. These properties include identity, magnitude, equal unit size, and absolute zero. When a measure has the property of identity, objects that are different receive different scores. For example, if participants in a study had different political affiliations, they would receive different scores. Measurements have the property of magnitude (also called ordinality) when the ordering of the numbers reflects the ordering of the variable. In other words, numbers are assigned in order so that some numbers represent more or less of the variable being measured than others.
identity A property of measurement in which objects that are different receive different scores.
magnitude A property of measurement in which the ordering of numbers reflects the ordering of the variable.
Measurements have an equal unit size when a difference of 1 is the same amount throughout the entire scale. For example, the difference between people who are 64 inches tall and 65 inches tall is the same as the difference between people who are 72 inches tall and 73 inches tall. The difference in each situation (1 inch) is identical. Notice how this differs from the property of magnitude. Were we to simply line up and rank a group of individuals based on their height, the scale would have the properties of identity and magnitude, but not equal unit size. Can you think about why this would be so? We would not actually measure people’s height in inches, but simply order them in terms of how tall they appear, from shortest (the person receiving a score of 1) to tallest (the person receiving the highest score). Thus, our scale would not meet the criteria of equal unit size. In other words, the difference in height between the two people receiving scores of 1 and 2 might not be the same as the difference in height between the two people receiving scores of 3 and 4.
equal unit size A property of measurement in which a difference of 1 means the same amount throughout the entire scale.
Lastly, measures have an absolute zero when assigning a score of 0 indicates an absence of the variable being measured. For example, time spent studying would have the property of absolute zero because a score of 0 on this measure would mean an individual spent no time studying. However, a score of 0 is not always equal to the property of absolute zero. As an example, think about the Fahrenheit temperature scale. That measurement scale has a score of 0 (the thermometer can read 0 degrees), but does that score indicate an absence of temperature? No, it indicates a very cold temperature. Hence, it does not have the property of absolute zero.
absolute zero A property of measurement in which assigning a score of 0 indicates an absence of the variable being measured.
Scales (Levels) of Measurement
As noted previously, the level or scale of measurement depends on the properties of the data. There are four scales of measurement (nominal, ordinal, interval, and ratio), and each of these scales has one or more of the properties described in the previous section. We will discuss the scales in order, from the one with the fewest properties to the one with the most properties—that is, from least to most sophisticated. As we will see in later modules, it is important to establish the scale of measurement of your data in order to determine the appropriate statistical test to use when analyzing the data.
Nominal Scale A nominal scale is one in which objects or individuals are broken into categories that have no numerical properties. Nominal scales have the characteristic of identity but lack the other properties. Variables measured on a nominal scale are often referred to as categorical variables because the measuring scale involves dividing the data into categories. However, the categories carry no numerical weight. Some examples of categorical variables, or data measured on a nominal scale, include ethnicity, gender, and political affiliation.
nominal scale A scale in which objects or individuals are broken into categories that have no numerical properties.
We can assign numerical values to the levels of a nominal variable. For example, for ethnicity, we could label Asian Americans as 1, African Americans as 2, Latin Americans as 3, and so on. However, these scores do not carry any numerical weight; they are simply names for the categories. In other words, the scores are used for identity, but not for magnitude, equal unit size, or absolute zero. We cannot order the data and claim that 1s are more than or less than 2s. We cannot analyze these data mathematically. It would not be appropriate, for example, to report that the mean ethnicity was 2.56. We cannot say that there is a true zero where someone would have no ethnicity. We can, however, form frequency distributions based on the data, calculate a mode, and use the chi-square test to analyze data measured on a nominal scale. If you are unfamiliar with these statistical concepts, don’t worry. They will be discussed in later modules.
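A short sketch of this point, using the text's coding scheme (1 = Asian American, 2 = African American, 3 = Latin American) with hypothetical data: counting categories and taking a mode is legitimate, but averaging the codes produces a number with no interpretation.

```python
from collections import Counter

# Hypothetical nominal data; the numbers are category labels only.
ethnicity_codes = [1, 2, 2, 3, 1, 2, 3, 3, 3, 1]

# A frequency count and the mode are valid summaries for nominal data.
counts = Counter(ethnicity_codes)
mode_code, mode_freq = counts.most_common(1)[0]
print(mode_code, mode_freq)   # category 3 occurs most often (4 times)

# By contrast, a "mean ethnicity" is arithmetic on labels — meaningless.
meaningless_mean = sum(ethnicity_codes) / len(ethnicity_codes)
print(meaningless_mean)       # 2.1, a number that describes nothing
```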
Ordinal Scale An ordinal scale is one in which objects or individuals are categorized and the categories form a rank order along a continuum. Data measured on an ordinal scale have the properties of identity and magnitude but lack equal unit size and absolute zero. Ordinal data are often referred to as ranked data because the data are ordered from highest to lowest, or biggest to smallest. For example, reporting how students did on an exam based simply on their rank (highest score, second highest, and so on) would be an ordinal scale. This variable would carry identity and magnitude because each individual receives a rank (a number) that carries identity, and beyond simple identity it conveys information about order or magnitude (how many students performed better or worse in the class). However, the ranking score does not have equal unit size (the difference in performance on the exam between the students ranked 1 and 2 is not necessarily the same as the difference between the students ranked 2 and 3), or an absolute zero. We can calculate a mode or a median based on ordinal data; it is less meaningful to calculate a mean. We can also use nonparametric tests such as the Wilcoxon rank-sum test or a Spearman rank-order correlation coefficient (again, these statistical concepts will be explained in later modules).
ordinal scale A scale in which objects or individuals are categorized and the categories form a rank order along a continuum.
interval scale A scale in which the units of measurement (intervals) between the numbers on the scale are all equal in size.
Interval Scale An interval scale is one in which the units of measurement (intervals) between the numbers on the scale are all equal in size. When using an interval scale, the properties of identity, magnitude, and equal unit size are met. For example, the Fahrenheit temperature scale is an interval scale of measurement. A given temperature carries identity (days with different temperatures receive different scores on the scale), magnitude (cooler days receive lower scores and hotter days receive higher scores), and equal unit size (the difference between 50 and 51 degrees is the same as that between 90 and 91 degrees). However, the Fahrenheit scale does not have an absolute zero. Because of this, we are not able to form ratios based on this scale (for example, 100 degrees is not twice as hot as 50 degrees). Because interval data can be added and subtracted, we can calculate the mean, median, or mode for interval data. We can also use t tests, ANOVAs, or Pearson product-moment correlation coefficients to analyze interval data (once again, these statistics will be discussed in later modules).
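A small sketch of why ratios fail on an interval scale: 0 °F is not an absence of temperature, so 100 °F is not "twice as hot" as 50 °F. Converting to Kelvin, a scale with a true zero, reveals the actual ratio.

```python
# Standard Fahrenheit-to-Kelvin conversion.
def fahrenheit_to_kelvin(f: float) -> float:
    return (f - 32) * 5 / 9 + 273.15

naive_ratio = 100 / 50   # 2.0 if you (wrongly) form a ratio on the interval scale
true_ratio = fahrenheit_to_kelvin(100) / fahrenheit_to_kelvin(50)
print(round(true_ratio, 3))   # about 1.098 — nowhere near "twice as hot"
```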
Ratio Scale A ratio scale is one in which, in addition to order and equal units of measurement, there is an absolute zero that indicates an absence of the variable being measured. Ratio data have all four properties of measurement—identity, magnitude, equal unit size, and absolute zero. Examples of ratio scales of measurement include weight, time, and height. Each of these scales has identity (individuals who weigh different amounts would receive different scores), magnitude (those who weigh less receive lower scores than those who weigh more), and equal unit size (1 pound is the same weight anywhere along the scale and for any person using the scale). These scales also have an absolute zero, which means a score of 0 reflects an absence of that variable. This also means that ratios can be formed. For example, a weight of 100 pounds is twice as much as a weight of 50 pounds. As with interval data, mathematical computations can be performed on ratio data. This means that the mean, median, and mode can be computed. In addition, as with interval data, t tests, ANOVAs, or the Pearson product-moment correlation can be computed.
ratio scale A scale in which, in addition to order and equal units of measurement, there is an absolute zero that indicates an absence of the variable being measured.
Notice that the same statistics are used for both interval and ratio scales. For this reason, many behavioral scientists simply refer to the category as interval-ratio data and typically do not distinguish between these two types of data. You should be familiar with the differences between interval and ratio data but aware that the same statistics are used with both types of data.
FEATURES OF SCALES OF MEASUREMENT
Nominal
  Examples: Ethnicity, religion, sex
  Properties: Identity
  Mathematical operations: Determine whether = or ≠
  Typical statistics used: Mode, chi-square

Ordinal
  Examples: Class rank, letter grade
  Properties: Identity, magnitude
  Mathematical operations: Determine whether = or ≠; determine whether < or >
  Typical statistics used: Mode, median, Wilcoxon tests

Interval
  Examples: Temperature (Fahrenheit and Celsius), many psychological tests
  Properties: Identity, magnitude, equal unit size
  Mathematical operations: Determine whether = or ≠; determine whether < or >; add; subtract
  Typical statistics used: Mode, median, mean, t test, ANOVA

Ratio
  Examples: Weight, height, time
  Properties: Identity, magnitude, equal unit size, absolute zero
  Mathematical operations: Determine whether = or ≠; determine whether < or >; add; subtract; multiply; divide
  Typical statistics used: Mode, median, mean, t test, ANOVA
Identify the scale of measurement for each of the following variables:
a. Phone area code
b. Grade of egg (large, medium, small)
c. Amount of time spent studying
d. Score on the SAT
e. Class rank
f. Number on a volleyball jersey
g. Miles per gallon
Discrete and Continuous Variables
Another means of classifying variables is in terms of whether they are discrete or continuous in nature. Discrete variables usually consist of whole-number units or categories. They are made up of chunks or units that are detached and distinct from one another. A change in value occurs a whole unit at a time, and decimals do not make sense with discrete scales. Most nominal and ordinal data are discrete. For example, gender, political party, and ethnicity are discrete scales. Some interval or ratio data can be discrete. For example, the number of children someone has would be reported as a whole number (discrete data), yet it is also ratio data (you can have a true zero and form ratios).
discrete variables Variables that usually consist of whole-number units or categories and are made up of chunks or units that are detached and distinct from one another.
Continuous variables usually fall along a continuum and allow for fractional amounts. The term continuous means that it “continues” between the whole-number units. Examples of continuous variables are age (22.7 years), height (64.5 inches), and weight (113.25 pounds). Most interval and ratio data are continuous in nature.
continuous variables Variables that usually fall along a continuum and allow for fractional amounts.
REVIEW OF KEY TERMS
absolute zero (p. 15)
continuous variables (p. 18)
discrete variables (p. 18)
equal unit size (p. 14)
identity (p. 14)
interval scale (p. 16)
magnitude (p. 14)
nominal scale (p. 15)
operational definition (p. 14)
ordinal scale (p. 16)
ratio scale (p. 16)
Chapter2
In this chapter, and the next, we discuss what to do with the observations made when conducting a study—namely, how to describe the data set through the use of descriptive statistics. First, we consider ways of organizing the data. We need to take the large number of observations made during the course of a study and present them in a manner that is easier to read and understand. Then, we discuss some simple descriptive statistics. These statistics allow us to do some “number crunching”—to condense a large number of observations into a summary statistic or set of statistics. The concepts and statistics described in this section can be used to draw conclusions from data. They do not come close to covering all that can be done with data gathered from a study. They do, however, provide a place to start.
MODULE 3
Organizing Data
Learning Objectives
• Organize data in a frequency distribution.
• Organize data in a class interval frequency distribution.
• Graph data in a bar graph.
• Graph data in a histogram.
• Graph data in a frequency polygon.
We will discuss two methods of organizing data: frequency distributions and graphs.
Frequency Distributions
To illustrate the processes of organizing and describing data, let’s use the data set presented in Table 3.1. These data represent the scores of 30 students on an introductory psychology exam. One reason for organizing data and using statistics is so that meaningful conclusions can be drawn. As you can see from Table 3.1, our list of exam scores is simply that—a list in no particular order. As shown here, the data are not especially meaningful. One of the first steps in organizing these data might be to rearrange them from highest to lowest or lowest to highest.
Once this is accomplished (see Table 3.2), we can try to condense the data into a frequency distribution—a table in which all of the scores are listed along with the frequency with which each occurs. We can also show a relative frequency distribution, which indicates the proportion of the total observations included in each score. When the relative frequency distribution is multiplied by 100, it is read as a percentage. A frequency distribution and a relative frequency distribution of our exam data are presented in Table 3.3.
frequency distribution A table in which all of the scores are listed along with the frequency with which each occurs.
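The frequency and relative frequency columns can be built mechanically. A sketch with hypothetical scores (the chapter's Table 3.1 data are not reproduced here): count each score, then divide each count by the total number of observations.

```python
from collections import Counter

# Hypothetical exam scores standing in for Table 3.1.
scores = [74, 75, 74, 80, 95, 74, 80, 45, 75, 74]
n = len(scores)

# Frequency distribution: each score with the number of times it occurs.
freq = Counter(scores)
for score in sorted(freq, reverse=True):
    rel = freq[score] / n   # relative frequency; ×100 gives a percentage
    print(f"{score:>3}  f = {freq[score]}  relative f = {rel:.2f} ({rel * 100:.0f}%)")
```

Note that the relative frequencies necessarily sum to 1.00 (100%), a quick check on any such table.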
The frequency distribution is a way of presenting data that makes the pattern of the data easier to see. We can make the data set even easier to read (especially desirable with large data sets) if we group the scores and create a class interval frequency distribution. We can combine individual scores into categories, or intervals, and list them along with the frequency of scores in each interval. In our exam score example, the scores range from 45 to 95—a 50-point range. A rule of thumb when creating class intervals is to have between 10 and 20 categories (Hinkle, Wiersma, & Jurs, 1988). A quick method of calculating what the width of the interval should be is to subtract the smallest score from the largest score and then divide by the number of intervals you would like (Schweigert, 1994). If we wanted 10 intervals in our example, we would proceed as follows to determine the width of each interval:
(95 − 45) / 10 = 50 / 10 = 5
class interval frequency distribution A table in which the scores are grouped into intervals and listed along with the frequency of scores in each interval.
The frequency distribution using the class intervals with a width of 5 is provided in Table 3.4. Notice how much more compact the data appear when presented in a class interval frequency distribution. Although such distributions have the advantage of reducing the number of categories, they have the disadvantage of not providing as much information as a regular frequency distribution. For example, although we can see from the class interval frequency distribution that five people scored between 75 and 79, we do not know their exact scores within the interval.
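The width rule and the grouping step can be sketched as follows. The scores here are hypothetical (chosen to span 45–95 like the text's example); the width formula is the one given above.

```python
# Hypothetical scores spanning the text's 45-to-95 range.
scores = [45, 52, 58, 61, 64, 67, 70, 73, 75, 76, 77, 78, 79, 82, 88, 95]

# Rule of thumb from the text: width = (highest - lowest) / desired intervals.
width = (max(scores) - min(scores)) // 10   # (95 - 45) / 10 = 5
low0 = min(scores)

# Tally each score into its class interval.
intervals = {}
for s in scores:
    low = low0 + ((s - low0) // width) * width   # lower bound of s's interval
    key = (low, low + width - 1)
    intervals[key] = intervals.get(key, 0) + 1

for (low, high), f in sorted(intervals.items()):
    print(f"{low}-{high}: {f}")
```

As the text notes, the interval counts (e.g., five scores in 75–79 here) hide the exact values inside each interval; that information is traded away for compactness.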
Graphing Data
Frequency distributions can provide valuable information, but sometimes a picture is of greater value. Several types of pictorial representations can be used to represent data. The choice depends on the type of data collected and what the researcher hopes to emphasize or illustrate. The most common graphs used by psychologists are bar graphs, histograms, and frequency polygons (line graphs). Graphs typically have two coordinate axes, the x-axis (the horizontal axis) and the y-axis (the vertical axis). Most commonly, the y-axis is shorter than the x-axis, typically 60% to 75% of the length of the x-axis.
Bar Graphs and Histograms
Bar graphs and histograms are frequently confused. When the data collected are on a nominal scale, or if the variable is a qualitative variable (a categorical variable for which each value represents a discrete category), then a bar graph is most appropriate. A bar graph is a graphical representation of a frequency distribution in which vertical bars are centered above each category along the x-axis and are separated from each other by a space, indicating that the levels of the variable represent distinct, unrelated categories.
qualitative variable A categorical variable for which each value represents a discrete category.
bar graph A graphical representation of a frequency distribution in which vertical bars are centered above each category along the x-axis and are separated from each other by a space, indicating that the levels of the variable represent distinct, unrelated categories.
If the variable is a quantitative variable (the scores represent a change in quantity), or if the data collected are ordinal, interval, or ratio in scale, then a histogram can be used. A histogram is also a graphical representation of a frequency distribution in which vertical bars are centered above scores on the x-axis, but in a histogram the bars touch each other to indicate that the scores on the variable represent related, increasing values.
quantitative variable A variable for which the scores represent a change in quantity.
histogram A graphical representation of a frequency distribution in which vertical bars centered above scores on the x-axis touch each other to indicate that the scores on the variable represent related, increasing values.
In both a bar graph and a histogram, the height of each bar indicates the frequency for that level of the variable on the x-axis. The spaces between the bars on the bar graph indicate not only the qualitative differences among the categories but also that the order of the values of the variable on the x-axis is arbitrary. In other words, the categories on the x-axis in a bar graph can be placed in any order. The fact that the bars are contiguous in a histogram indicates not only the increasing quantity of the variable but also that the variable has a definite order that cannot be changed.
A bar graph is illustrated in Figure 3.1. For a hypothetical distribution, the frequencies of individuals who affiliate with various political parties are indicated. Notice that the different political parties are listed on the x-axis, whereas frequency is recorded on the y-axis. Although the political parties are presented in a certain order, this order could be rearranged because the variable is qualitative.
Figure 3.2 illustrates a histogram. In this figure, the frequencies of intelligence test scores from a hypothetical distribution are indicated. A histogram is appropriate because the IQ score variable is quantitative. The variable has a specific order that cannot be rearranged. You can see how to use Excel and SPSS to create both bar graphs and histograms in the Statistical Software Resources section at the end of this chapter. If you are unfamiliar with Excel or SPSS, see Appendix C to get started with these tools.
Frequency Polygons (Line Graphs)
We can also depict the data in a histogram as a frequency polygon—a line graph of the frequencies of individual scores or intervals. Again, scores (or intervals) are shown on the x-axis and frequencies on the y-axis. Once all the frequencies are plotted, the data points are connected. You can see the frequency polygon for the intelligence score data in Figure 3.3.
frequency polygon A line graph of the frequencies of individual scores.
Frequency polygons are appropriate when the variable is quantitative or the data are ordinal, interval, or ratio. In this respect, frequency polygons are similar to histograms. Frequency polygons are especially useful for continuous data (such as age, weight, or time) in which it is theoretically possible for values to fall anywhere along the continuum. For example, an individual can weigh 120.5 pounds or be 35.5 years of age. Histograms are more appropriate when the data are discrete (measured in whole units)—for example, number of college classes taken or number of siblings. You can see how to use Excel and SPSS to create frequency polygons in the Statistical Software Resources section at the end of this chapter. If you are unfamiliar with Excel or SPSS, see Appendix C to get started with these tools.
DATA ORGANIZATION
Frequency distribution
  Description: A list of all scores occurring in the distribution along with the frequency of each
  Use with: Nominal, ordinal, interval, or ratio data

Bar graph
  Description: A pictorial graph with bars representing the frequency of occurrence of items for qualitative variables
  Use with: Nominal data

Histogram
  Description: A pictorial graph with bars representing the frequency of occurrence of items for quantitative variables
  Use with: Typically ordinal, interval, or ratio data; most appropriate for discrete data

Frequency polygon
  Description: A pictorial line graph representing the frequency of occurrence of items for quantitative variables
  Use with: Typically ordinal, interval, or ratio data; more appropriate for continuous data
REVIEW OF KEY TERMS
bar graph (p. 29)
class interval frequency distribution (p. 27)
frequency distribution (p. 26)
frequency polygon (p. 30)
histogram (p. 29)
qualitative variable (p. 29)
quantitative variable (p. 29)
MODULE EXERCISES
(Answers to odd-numbered questions appear in Appendix B.)
Exercises 1–3: The following data represent a distribution of speeds at which individuals were traveling on a highway.
64 64 76 67 65 68 67 70 67 65 80 70 79 72 73 65 65 62 68 64
MODULE 4
Measures of Central Tendency
Learning Objectives
• Differentiate measures of central tendency.
• Know how to calculate the mean, median, and mode.
• Know when it is most appropriate to use each measure of central tendency.
Organizing data into tables and graphs can help make a data set more meaningful. These methods, however, do not provide as much information as numerical measures. Descriptive statistics are numerical measures that describe a distribution by providing information on the central tendency of the distribution, the width of the distribution, and the distribution’s shape. A measure of central tendency characterizes an entire set of data in terms of a single representative number. Measures of central tendency measure the “middleness” of a distribution of scores in three ways: the mean, median, and mode.
descriptive statistics Numerical measures that describe a distribution by providing information on the central tendency of the distribution, the width of the distribution, and the shape of the distribution.
measure of central tendency A number intended to characterize an entire distribution.
Mean
The most commonly used measure of central tendency is the mean—the arithmetic average of a group of scores. You are probably familiar with this idea. We can calculate the mean for our distribution of exam scores (from the previous module) by adding all of the scores together and dividing by the total number of scores. Mathematically, this would be:
mean A measure of central tendency; the arithmetic average of a distribution.
μ = ΣX / N
where
μ (pronounced “mu”) represents the symbol for the population mean
Σ represents the symbol for “the sum of”
X represents the individual scores, and
N represents the number of scores in the distribution
To calculate the mean, then, we sum all of the Xs, or scores, and divide by the total number of scores in the distribution (N). You may have also seen this formula represented as follows:
X̄ = ΣX / N

In this case, X̄ (read “X-bar”) represents a sample mean.
We can use either formula (they are the same) to calculate the mean for the distribution of exam scores used in Module 3. These scores are presented again in Table 4.1, along with a column showing frequency (f) and another column showing the frequency of the score multiplied by the score (f times X). The sum of all the values in the fX column is the sum of all the individual scores (ΣX). Using this sum in the formula for the mean, we have:
μ = ΣX / N = 2,220 / 30 = 74.00
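The f·X method amounts to a frequency-weighted sum. A sketch with hypothetical frequencies, chosen so the totals happen to match the figures above (ΣX = 2,220, N = 30); the actual scores in Table 4.1 are not reproduced here.

```python
# Hypothetical frequency table: score -> frequency (f).
freq_table = {60: 3, 70: 9, 74: 10, 80: 5, 90: 3}

n = sum(freq_table.values())                          # N = total number of scores
sum_fx = sum(score * f for score, f in freq_table.items())  # the fX column summed
mean = sum_fx / n
print(n, sum_fx, mean)   # 30 2220 74.0
```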
You can also calculate the mean using Excel, SPSS, or the Stats function on most calculators. As an example, the procedure for calculating the mean using each of these tools is presented in the Statistical Software Resources section at the end of this chapter. If you are unfamiliar with Excel or SPSS, see Appendix C to get started with these tools. Use of the mean is constrained by the nature of the data. It is appropriate for interval and ratio data, but it is not appropriate for ordinal or nominal data.
Median
Another measure of central tendency, the median, is used in situations in which the mean might not be representative of a distribution. Let’s use a different distribution of scores to demonstrate when it might be appropriate to use the median rather than the mean. Imagine that you are considering taking a job with a small computer company. When you interview for the position, the owner of the company informs you that the mean income for employees at the company is approximately $100,000 and that the company has 25 employees. Most people would view this as good news. Having learned in a statistics class that the mean might be influenced by extreme scores, you ask to see the distribution of 25 incomes. The distribution is shown in Table 4.2.
The calculation of the mean for this distribution is:
ΣX / N = 2,498,000 / 25 = 99,920
Notice that, as claimed, the mean income of company employees is very close to $100,000. Notice also, however, that the mean in this case is not very representative of central tendency, or “middleness.” In this distribution, the mean is thrown off center or inflated by one very extreme score of $1,800,000 (the income of the company’s owner, needless to say). This extremely high income pulls the mean toward it and thus increases or inflates the mean. Thus, in distributions with one or a few extreme scores (either high or low), the mean will not be a good indicator of central tendency. In such cases, a better measure of central tendency is the median.
The median is the middle score in a distribution after the scores have been arranged from highest to lowest or lowest to highest. The distribution of incomes in Table 4.2 is already ordered from lowest to highest. To determine the median, we simply have to find the middle score. In this situation, with 25 scores, that would be the 13th score. You can see that the median of the distribution would be an income of $27,000, which is far more representative of the central tendency for this distribution of incomes.
median A measure of central tendency; the middle score in a distribution after the scores have been arranged from highest to lowest or lowest to highest.
Why is the median not as influenced as the mean by extreme scores? Think about the calculation of each of these measures. When calculating the mean, we must add in the atypical income of $1,800,000, thus distorting the calculation. When determining the median, however, we do not consider the size of the $1,800,000 income; it is only a score at one end of the distribution whose numerical value does not have to be considered in order to locate the middle score in the distribution. The point to remember is that the median is not affected by extreme scores in a distribution because it is only a positional value. The mean is affected because its value is determined by a calculation that has to include the extreme value.
In the income example, the distribution had an odd number of scores (N = 25). Thus, the median was an actual score in the distribution (the 13th score). In distributions with an even number of observations, the median is calculated by averaging the two middle scores. In other words, we determine the middle point between the two middle scores. Look back at the distribution of exam scores in Table 4.1. This distribution has 30 scores. The median would be the average of the 15th and 16th scores (the two middle scores). Thus, the median would be 75.5—not an actual score in the distribution, but the middle point nonetheless. Notice that in this distribution, the median (75.5) is very close to the mean (74.00). Why are they so similar? Because this distribution contains no extreme scores, both the mean and the median are representative of the central tendency of the distribution.
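The contrast between the two measures is easy to demonstrate. A sketch of the income example with hypothetical salaries (only the $1,800,000 owner's income comes from the text): the extreme score drags the mean upward but leaves the median, a purely positional value, untouched.

```python
import statistics

# 25 ordered incomes: 24 ordinary salaries plus one extreme score.
incomes = [25_000] * 12 + [27_000] + [30_000] * 11 + [1_800_000]

print(statistics.mean(incomes))     # pulled far above any typical employee's pay
print(statistics.median(incomes))   # 27000 — the 13th of the 25 ordered scores
```

Replacing the owner's income with an ordinary salary would change the mean dramatically but leave the median exactly where it is.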
Like the mean, the median can be used with ratio and interval data and is inappropriate for use with nominal data, but unlike the mean, the median can be used with most ordinal data.
Mode
The third measure of central tendency is the mode—the score in a distribution that occurs with the greatest frequency. In the distribution of exam scores, the mode is 74 (similar to the mean and median). In the distribution of incomes, the mode is $25,000 (similar to the median, but not the mean). In some distributions, all scores occur with equal frequency; such a distribution has no mode. In other distributions, several scores occur with equal frequency. Thus, a distribution may have two modes (bimodal), three modes (trimodal), or even more. The mode is the only indicator of central tendency that can be used with nominal data. Although it can also be used with ordinal, interval, or ratio data, the mean and median are more reliable indicators of the central tendency of a distribution, and the mode is seldom used.
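Because a distribution can have more than one mode, Python's `statistics` module (3.8+) offers `multimode` alongside `mode`; a minimal sketch with hypothetical score lists:

```python
import statistics

exam_scores = [68, 70, 74, 74, 74, 77, 81]   # unimodal: 74 occurs most often
print(statistics.mode(exam_scores))           # 74

bimodal = [62, 65, 65, 70, 73, 73, 78]        # two scores tie for most frequent
print(statistics.multimode(bimodal))          # [65, 73]
```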
mode A measure of central tendency; the score in the distribution that occurs with the greatest frequency.
MEASURES OF CENTRAL TENDENCY
TYPE OF CENTRAL TENDENCY MEASURE

Mean
Definition: The arithmetic average
Use with: Interval and ratio data
Caution: Not for use with distributions with a few extreme scores

Median
Definition: The middle score in a distribution of scores organized from highest to lowest or lowest to highest
Use with: Ordinal, interval, and ratio data

Mode
Definition: The score occurring with greatest frequency
Use with: Nominal, ordinal, interval, or ratio data
Caution: Not a reliable measure of central tendency
REVIEW OF KEY TERMS
descriptive statistics (p. 34)
mean (p. 34)
measure of central tendency (p. 34)
median (p. 36)
mode (p. 38)
MODULE EXERCISES
(Answers to odd-numbered questions appear in Appendix B.)
64, 73, 65, 76, 65, 70, 65, 68, 72, 67, 64, 65, 67, 67, 62, 80, 68, 64, 79, 70
Calculate the mean, median, and mode for the speed distribution data set.
Exercises 3–6: Calculate the mean, median, and mode for the following four distributions.
Distribution A: 10, 11, 11, 12, 12, 12, 13, 13, 14
Distribution B: 10, 11, 11, 12, 12, 12, 13, 13, 100
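If you want to check your hand calculations for distributions like A and B, the stdlib `statistics` module computes all three measures. Note how replacing the 14 in A with 100 in B inflates the mean but leaves the median and mode untouched:

```python
import statistics

A = [10, 11, 11, 12, 12, 12, 13, 13, 14]
B = [10, 11, 11, 12, 12, 12, 13, 13, 100]

for name, dist in (("A", A), ("B", B)):
    print(name,
          round(statistics.mean(dist), 2),  # mean, rounded to 2 decimals
          statistics.median(dist),          # positional middle score
          statistics.mode(dist))            # most frequent score
```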
CRITICAL THINKING CHECK ANSWERS
Critical Thinking Check 4.1
CHAPTER TWO SUMMARY AND REVIEW
Descriptive Statistics I
CHAPTER SUMMARY
This chapter discussed data organization and descriptive statistics. Several methods of data organization were presented, including how to design a frequency distribution, a bar graph, a histogram, and a frequency polygon. The type of data appropriate for each of these methods was also discussed. One category of descriptive statistics that summarizes a large data set includes measures of central tendency (mean, median, and mode). These statistics provide information about the central tendency, or “middleness,” of a distribution of scores. The mean is the arithmetic average; the median is the middle score in a distribution of scores after the scores have been ordered from highest to lowest, or lowest to highest; and the mode is the score that occurs with the greatest frequency.
CHAPTER 2 REVIEW EXERCISES
(Answers to exercises appear in Appendix B.)
Fill-in Self-Test
Answer the following questions. If you have trouble answering any of the questions, restudy the relevant material before going on to the multiple-choice self-test.
Multiple-Choice Self-Test
Select the single best answer for each of the following questions. If you have trouble answering any of the questions, restudy the relevant material.
a. histogram
b. frequency polygon
c. bar graph
d. class interval histogram
a. categorical variable; numerical variable
b. numerical variable; categorical variable
c. bar graph; histogram
d. categorical variable and bar graph; numerical variable and histogram
a. equal to; equal to
b. greater than; equal to
c. equal to; less than
d. greater than; greater than
a. mean
b. standard deviation
c. median
d. either the mean or the median
a. 0; 0
b. $100,000; 0
c. 0; $100,000
d. $100,000; $100,000
a. mean; median
b. median; mode
c. mean; mode
d. mode; median
a. ordinal, interval, and ratio data only; nominal data only
b. nominal data only; ordinal data only
c. interval and ratio data only; all types of data
d. None of the above
Self-Test Problems
1, 1, 2, 2, 4, 5, 8, 9, 10, 11, 11, 11
CHAPTER TWO
Statistical Software Resources
If you need help getting started with Excel or SPSS, please see Appendix C: Getting Started with Excel and SPSS. The procedures outlined in all of the Statistical Software Resources sections will work with Excel 2007, 2010, and 2013; with SPSS 18-22; and on the TI-83 and TI-84, regular and plus versions.
MODULE 3 Organizing Data
Using Excel to Create a Bar Graph
Begin by entering the data from Figure 3.1 in Module 3 into an Excel spreadsheet, as follows. Please note that the column headings of “Affiliation” and “Frequency” are entered into the spreadsheet. Once the data are entered, highlight all of the data including the column headers.
Now select the Insert ribbon and then Column (in Excel 2013, this is found in the Charts menu). Note that although Excel also offers a Bar option for figures, it produces horizontal bars, whereas the bars in a bar graph should be vertical. Select the top left option from the Column options (2-D column chart). This should produce the following bar graph:
Notice that the different political parties are listed on the x-axis, whereas frequency is recorded on the y-axis. Excel provides Chart Tools so that we can modify the appearance of a graph; for example, you could use them to make the bar graph conform to APA style. To use Chart Tools, click on the chart in Excel; the ribbons under Chart Tools (Design, Layout, and Format in 2007 and 2010; Design and Format in 2013) will then become accessible. Using these menus, you can change the appearance of the chart to, for example, add Axis Titles (under the Layout ribbon in 2007 and 2010, or under the Design ribbon using the Add Chart Element menu in 2013), remove the horizontal Gridlines (again under the Layout ribbon, or via the Add Chart Element menu in 2013), or change the color of the bars (Excel uses blue as the default) by using the Format ribbon, clicking one of the bars, and selecting Shape Fill. After making these modifications, your chart will appear as follows:
Please also note that although the political parties are presented in a certain order, this order could be rearranged because the variable is qualitative.
Using Excel to Create a Histogram
To illustrate the difference between a bar graph and a histogram, let’s use the data from the table below, which lists the frequencies of intelligence test scores from a hypothetical distribution of 30 individuals. A histogram is appropriate for these data because the IQ score variable is quantitative. The variable has a specific order that cannot be rearranged.
Begin by entering the data into an Excel spreadsheet, as follows. Please note that the column headings of “Score” and “Frequency” are entered into the spreadsheet. Once the data are entered, highlight only the “Frequency” data as is illustrated in the next screen capture.
Because Excel does not have a histogram option in which the bars in the graph touch, we’ll have to use special formatting to create the histogram. Click on the Insert ribbon and then Column (in Excel 2013, this is in the Charts menu). Select the option in the top left corner, as we did when creating bar graphs. This should produce the following graph:
We’ll begin editing the graph by removing the spaces between the bars. To do so, right-click on any of the bars and select Format Data Series to produce the following pop-up window (in 2013 this window will appear on the right side of the screen):
Move the Gap Width tab to zero as is indicated in the window and then close the window. Your figure should now more closely resemble a histogram. Now you can use the Chart Tools to modify your figure so that it more closely resembles what you desire. This should include axis labels on the x- and y-axes and changing the values on the x-axis to reflect the range of intelligence scores that were measured. To accomplish the latter, right-click on a value on the x-axis and choose Select Data… to produce the following pop-up window:
Click on the Edit window under Horizontal (Category) Axis Labels. You’ll receive the following pop-up window:
Highlight the IQ scores from the spreadsheet and they will be inserted into the Axis label range: box. Then click OK. Click OK a second time to close the original pop-up window. You can now use the Chart Tools to format your histogram so that it more closely resembles a graph appropriate for APA style. This would include adding axis labels to the x– and y-axes, changing the bars from blue to black, and removing the gridlines from the graph. After making these changes, your figure should look as follows:
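The "Frequency" column that feeds Excel's histogram can also be tallied programmatically. A minimal sketch using Python's stdlib `collections.Counter` on a few hypothetical IQ scores (the module's actual 30-score table is not reproduced here):

```python
from collections import Counter

# Hypothetical raw IQ scores; in practice these would come from your data file.
iq_scores = [95, 100, 100, 105, 105, 105, 110, 110, 120]

# Counter tallies how often each score occurs -- the "Frequency" column.
freq = Counter(iq_scores)
for score in sorted(freq):
    print(score, freq[score])
```

Each (score, frequency) pair corresponds to one bar in the histogram.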
Using Excel to Create a Frequency Polygon (Line Graph)
Begin by entering the intelligence test score data from the table presented in the previous example into an Excel spreadsheet, as follows. Then, highlight only the Frequency data as is illustrated in the next screen capture.
Next, click on the Insert ribbon and then Line (in 2013 this is in the Charts menu). Select the option in the top left corner (the first 2-D line option). This should produce the following graph:
Now, right-click on a value on the x-axis and then click Select Data… to produce the following pop-up window:
Click on the Edit window under Horizontal (Category) Axis Labels. You’ll receive the following pop-up window:
Highlight the IQ scores from the spreadsheet and they will be inserted into the Axis label range: box. Then click OK. Click OK a second time to close the original pop-up window. You can now use the Chart Tools to format your frequency polygon so that it more closely resembles line graphs appropriate for APA style. This would include adding axis labels to the x- and y-axes, changing the line from blue to black, and removing the gridlines from the graph. After making these changes, your figure should look as follows:
Using SPSS to Create a Bar Graph
We’ll use the same data as in the earlier example (Figure 3.1 in Module 3) to illustrate how to use SPSS to create a bar graph. To begin, we enter the data into the SPSS spreadsheet. As with Excel we use two columns, one labeled Affiliation and one labeled Frequency, as can be seen in the following screen capture.
Next, we click on the Graphs menu and then Chart Builder. From the Gallery menu on the bottom of the dialog box select Bar and then double-click the first bar graph icon in the top row to produce the following dialog box.
You can see that the two variables are listed in the top left Variables box. Drag the Affiliation variable to the x-axis box in the figure on the top right, and then drag the Frequency variable to the y-axis box in the figure on the top right. The dialog box should now look as follows:
Next, click on the Element Properties… box on the right-hand side of the dialog box to receive the following dialog box.
Click OK in the original dialog box. SPSS will then produce an output file with the following bar graph.
Using SPSS to Create a Histogram
Let’s use the same data set as in the Excel histogram example to create a histogram with SPSS. Thus, we’ll enter the IQ score data into the SPSS spreadsheet. However, in this case, each individual score is entered. This is illustrated in the screen capture below in which all 30 scores have been entered into SPSS. (Please note that due to screen size constraints, the final four scores do not show in the screen capture. Thus, make sure you use the IQ data from the earlier table that we used when creating a histogram using Excel.)
The variable was named IQscore using the Variable View screen and it was designated a Numeric variable with the Scale level of measurement. To name the variable, click on the Variable View tab at the bottom of the window and type the name you wish to give the variable in the highlighted Name box. The variable name cannot have any spaces in it. Because these data represent intelligence score data, we’ll type in IQscore. Note also that the Type of data is Numeric. Once the variable is named, highlight the Data View tab on the bottom left of the screen in order to get back to the data spreadsheet. (See Appendix C: “Getting Started with Excel and SPSS,” if you are unfamiliar with naming variables.) From the Data View spreadsheet screen, select Graphs, and then Chart Builder… to receive the following dialog boxes.
Select Histogram and then double-click on the first example of a histogram. In the dialog box on the top left of the screen, click on IQscore and drag it to the x-axis box in the histogram on the right. Then, in the Element Properties box on the right highlight Bar1, as in the screen capture above and then click on Set Parameters to receive the following dialog box:
Make sure that Automatic is selected as the option in the first box. In the second box, select Custom and set the Number of intervals at 18 (the number of different IQ scores received by the 30 participants in the study). Then click Continue and then Apply. Finally, click OK in the dialog box on the left and you should receive the histogram in the output file.
Notice that the bars are touching, except for those instances in which there were missing scores.
Using SPSS to Create a Frequency Polygon (Line Graph)
We’ll once again use the intelligence test score data to illustrate how to create a frequency polygon using SPSS. Enter the data in the same manner we did when we created a histogram in SPSS. In other words, enter each individual score on a separate line in SPSS so that all 30 scores in the distribution are entered individually as we did earlier in the module when creating the histogram. Once the data are entered, named, and coded as numeric with the scale level of measurement, click on Graphs and then Chart Builder to receive the following dialog boxes:
Double-click on the Line graph option in the lower left of the screen, and then double-click on the first example of a line graph. Then drag the IQscore variable from the top left of the screen to the x-axis box. In the Element Properties dialog box on the right of the screen, highlight Line 1 and then select Histogram in the Statistic box. Click on the Set Parameters box to receive the following dialog box:
Select Automatic in the first box, and then Custom in the second box indicating that the number of intervals should be 52 (the total range of IQ scores for our group of 30 individuals). Click Continue and then Apply. Finally, click OK to execute the procedure. You should receive the following frequency polygon.
MODULE 4 Measures of Central Tendency
Using Excel to Calculate the Mean, Median, and Mode
To begin using Excel to conduct data analyses, the data must be entered into an Excel spreadsheet. This simply involves opening Excel and entering the data into the spreadsheet. You can see in the following spreadsheet that I have entered the exam grade data from Table 4.1 in Module 4 into an Excel spreadsheet.
Once the data have been entered, we use the Data Analysis tool to calculate descriptive statistics. This is accomplished by clicking on the Data tab or ribbon and then clicking the Data Analysis icon on the top right side of the window. Once the Data Analysis tab is active, a dialog box of options will appear (see next).
Select Descriptive Statistics as is indicated in the preceding box, and then click OK. This will lead to the following dialog box:
With the cursor in the Input Range box, highlight the data that you want analyzed from Column A in the Excel spreadsheet so that they appear in the input range. In addition, check the Summary statistics box. Once you have done this, click OK. The summary statistics will appear in a new Worksheet, as seen next.
As you can see, there are several descriptive statistics reported, including all three measures of central tendency (mean, median, and mode).
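If you prefer a scriptable alternative to Excel's Data Analysis tool, the same three measures of central tendency (plus the minimum and maximum) can be produced with Python's stdlib; a sketch with hypothetical exam grades (Table 4.1's actual data are not reproduced here):

```python
import statistics

# Hypothetical exam grades standing in for the Table 4.1 column.
grades = [62, 68, 70, 74, 74, 74, 77, 81, 85, 90]

summary = {
    "mean":   statistics.mean(grades),
    "median": statistics.median(grades),
    "mode":   statistics.mode(grades),
    "min":    min(grades),
    "max":    max(grades),
}
print(summary)
```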
Using SPSS to Calculate the Mean
As with the Excel exercise above, we will once again be using the data from Table 4.1 in Module 4 to calculate descriptive statistics. We begin by entering the data from Table 4.1 into an SPSS spreadsheet. This simply involves opening SPSS and entering the data into the spreadsheet. You can see in the following spreadsheet that I have entered the exam grade data from Table 4.1 into an SPSS spreadsheet.
Notice that the variable is simply named VAR00001. To rename the variable to something appropriate for your data set, click on the Variable View tab on the bottom left of the screen. You will see the following window:
Type the name you wish to give the variable in the highlighted Name box. The variable name cannot have any spaces in it. Because these data represent exam grade data, we’ll type in Examgrade. Note also that the Type of data is Numeric. Once the variable is named, highlight the Data View tab on the bottom left of the screen in order to get back to the data spreadsheet. Once you’ve navigated back to the data spreadsheet, click on the Analyze tab at the top of the screen and a drop-down menu with various statistical analyses will appear. Select Descriptive Statistics and then Descriptive. The following dialog box will appear:
Examgrade will be highlighted, as above. Click on the arrow in the middle of the window and the Examgrade variable will be moved over to the Variables box. Then click on Options to receive the following dialog box:
You can see that the Mean, Standard Deviation, Minimum, and Maximum are all checked. However, you could select any of the descriptive statistics you want calculated. After making your selections, click Continue and then OK. The output will appear on a separate page as an Output file like the one below where you can see the minimum and maximum scores for this distribution along with the mean exam score of 74. Please note that if you had more than one set of data—for example, two classes of exam scores—they could each occupy one column in your SPSS spreadsheet and you could conduct analyses on both variables at the same time. In this situation, separate descriptive statistics would be calculated for each data set.
Using the TI-84 to Calculate the Mean
Follow the steps below to use your TI-84 calculator to calculate the mean for the data set from Table 4.1 in Module 4.
The statistics for the single variable on which you entered data will be presented on the calculator screen. The mean is presented on the first line of output as X̄ (read "X-bar").