
=Chapter Four: Data Collection=

The purpose of this chapter is to focus on the various facets of collecting data. It addresses tests and measures, questionnaires and interview protocols, obtrusive and unobtrusive observational methods, and the validity and reliability of these approaches. In addition, the relationship between the data collection method and the research method should be noted, since not all data collection methods are appropriate for all research designs. Finally, the chapter discusses issues related to the data collectors, the types of collection methods, and the recording of data.

1. Quantitative Data
Quantitative data is numerical data. The most common way to collect numerical data is through a test, which can measure attitudes, personality, self-perceptions, aptitude, and performance; standardized tests are the most common form of quantitative data collection. Quantitative data collection can also occur in a variety of structured formats, including questionnaires, interviews, focus groups, and observations. These must be structured so that the data collected is quantitative and suited to the purpose of the research.

1.1.1 Validity
One of the most important psychometric properties to consider in using a test or assessment procedure is validity. A study has validity if the researcher's inferences and interpretations of the data are accurate. To show this, the researcher must have sound rationales for the interpretations, and the research must be well designed to control for extraneous variables. The researcher should investigate any other possible explanations for the outcome besides his or her own conclusion. Involving peers to review the design and findings can be very helpful, as others may spot holes in the methodology and analysis that the researcher has overlooked.

The Texas Education Agency assesses the validity of test items on the TAKS test by embedding field-test questions in each annual TAKS test. These questions do not count against a student's score, but the results are tabulated as research for developing the next year's test. In addition, each year certain classes at selected schools field-test an entire TAKS writing test, on which students must complete an essay as well as answer objective questions about grammar and usage. Researchers analyze the data collected here to create future tests and to analyze the questions for validity.

1.1.2 Reliability
In any kind of testing, reliability must be present in the scores. To achieve reliability, consistency and stability must be present in the set of test scores: if an exam is reliable, the scores it produces will be similar each time the test is given. Ways of computing reliability include test-retest, equivalent forms, and internal consistency. Interscorer reliability refers to consistent scores being given by multiple judges or assessors. The reliability of a study's data can also be cross-checked when the study is replicated.
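Two of the reliability estimates named above can be computed directly from score data. The following is a minimal sketch, not taken from the chapter, using invented scores: test-retest reliability as the Pearson correlation between two administrations, and internal consistency as Cronbach's alpha over item-level scores.

```python
def pearson(x, y):
    """Test-retest reliability: correlation between two administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Internal consistency: `items` is a list of per-item score columns."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    def var(col):
        m = sum(col) / n
        return sum((v - m) ** 2 for v in col) / (n - 1)
    totals = [sum(row) for row in zip(*items)]   # each respondent's total
    return k / (k - 1) * (1 - sum(var(c) for c in items) / var(totals))

# Hypothetical data: five students who took the same test twice.
first  = [70, 82, 90, 65, 78]
second = [72, 80, 91, 67, 75]
print(round(pearson(first, second), 3))   # close to 1.0 indicates stability

# Hypothetical item-level scores (three items, five respondents).
items = [
    [4, 5, 3, 4, 5],
    [3, 5, 4, 4, 4],
    [4, 4, 3, 5, 5],
]
print(round(cronbach_alpha(items), 3))
```

A high correlation (conventionally above about .80) between the two sittings, or a high alpha across items, suggests the consistency and stability the chapter describes.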

1.2 Developing an instrument
An instrument is a device for measuring the present value of a quantity under observation; more broadly, it is a medium, means, or agent made to serve a purpose. If an instrument has already been developed for the research topic you are interested in, then by all means use it, because validity and reliability information will most likely be provided. However, if no existing instrument meets your research needs, you will have to construct one that is appropriate. Developing an instrument can take the form of a pre-test/post-test format or of collecting data through observation. Both forms measure a group collectively rather than targeting specifics (i.e., a control/experimental setting). As researchers, we formulate different variables (in quantitative, qualitative, or mixed approaches) in order to gather outcomes; from these outcomes we then pinpoint single outcomes in order to generalize to groups. Other definitions according to Johnson & Christensen: instrumentation is any change that occurs in the way the dependent variable is measured (p. 262); an instrumental case study is one in which the interest is in understanding something more general than the particular case (p. 408).

When using evidence based on test content to develop an instrument for a study, there are three points to consider:

(a) Understand what the test is supposed to measure. For an extreme example, if you are testing whether students who are taught test-taking strategies will do better on a reading test than those who are taught only content, it would not be appropriate to ask questions about math. The test maker must understand that they are testing results on a reading test, and a math question would neither prove nor disprove the hypothesis.

(b) Examine the content on the specific test. After the test is constructed, go back and check it again to make sure the questions will provide the data the researcher is looking to collect. As in the example above, a math question would not show whether teaching test-taking strategies helps students on reading tests, so that question must be deleted.

(c) Decide whether the test adequately represents the content domain. Do the questions on the test actually ask about reading? If you spent six weeks teaching students test-taking strategies and 80 percent of the test is easy enough that the students do not have to use them, then the instrument is poorly developed for gathering the data the researcher is looking for.
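Point (c) is often checked with a "table of specifications": a target share of items for each part of the content domain, compared against what the drafted test actually contains. The sketch below is purely illustrative; the topic names, target weights, and item list are all invented.

```python
from collections import Counter

# Hypothetical content blueprint: what share of items each topic should get.
blueprint = {"main idea": 0.30, "inference": 0.40, "vocabulary": 0.30}

# Hypothetical topic label for each drafted item (20 items in all).
item_topics = ["inference"] * 8 + ["main idea"] * 6 + ["vocabulary"] * 6

counts = Counter(item_topics)
n = len(item_topics)
for topic, target in blueprint.items():
    actual = counts[topic] / n
    flag = "" if abs(actual - target) < 0.05 else "  <-- revise"
    print(f"{topic}: target {target:.0%}, actual {actual:.0%}{flag}")
```

If an off-domain topic (say, math) showed up in `item_topics`, it would appear in the counts with no blueprint target, which is exactly the mismatch points (a) and (b) tell the test maker to catch.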

1.3 Tests and Measures
Two types of test are usually used in data collection. The first is the formative test. These assessments are used to collect data but are usually responsive to the information that is derived: the tests may be shaped over time by the participants' responses to get a deeper picture of an investigative area. Usually formative testing is used with the intent of gathering information for improvement. Rick Stiggins, a psychometrician who designed standardized summative assessments, has become a strong proponent of formative assessments. He sees the role of the teacher in the formative assessment process as that of a coach, one who sees assessment as an opportunity to guide the learner to improvement and ultimate success. He sees the need for both kinds of assessment, but finds that a majority of educators lack a working understanding of the power gained through formative assessments (Stiggins, R. [Presenter]. (2006). //Classroom Assessment for Student Learning: Doing It Right, Using It Well// [DVD]. Princeton, NJ: Educational Testing Services). The second type of testing is summative. These tests usually present quantitative snapshots of one point in time; most state and standardized tests fall into this category. Robert Stake succinctly captured the difference when he stated, "When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative."

Formative assessment is commonly referred to as testing for learning and summative assessment as testing of learning. Testing for learning means the test is in place to help the student learn and to help the teacher to make instructional decisions to support the learning. Testing of learning is in place to test if a student has learned the required content.

1.4 Other methods
One method for collecting data for a quantitative study is to use existing data that was collected by another researcher for a different purpose. You can sometimes find studies that have already collected data on the topic you wish to study. One issue with this type of data is whether the participants are identifiable: although you may not come into direct contact with the participants, an application will still need to be submitted to your Institutional Review Board (IRB) so that it can rule on whether anonymity is being honored. Normally you would want data sets with nothing identifying specific participants. Another issue is validity. If you are using data from another research study, you will need to be sure the data collection methods used were appropriate for the kind of conclusions you wish to make. For example, if there were sampling errors, you may end up with invalid findings in your research report.
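Before analyzing an existing data set, a researcher would normally strip the columns that directly identify participants. The following is a minimal sketch of that de-identification step; the column names and data are invented for illustration, and real projects would also consider indirect identifiers (birthdates, zip codes, etc.) per IRB guidance.

```python
import csv
import io

# Columns treated as direct identifiers in this hypothetical data set.
IDENTIFIERS = {"name", "student_id", "birthdate"}

# Stand-in for an existing data file obtained from another researcher.
raw = io.StringIO(
    "name,student_id,birthdate,score\n"
    "Ana,1001,2001-05-02,88\n"
    "Ben,1002,2000-11-17,74\n"
)

reader = csv.DictReader(raw)
deidentified = [
    {k: v for k, v in row.items() if k not in IDENTIFIERS}
    for row in reader
]
print(deidentified)   # only the non-identifying 'score' column survives
```

The de-identified rows can then be analyzed or shared without exposing who the participants were, which is the anonymity concern the IRB will ask about.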

2. Qualitative Data
Qualitative data describes a phenomenon or event; the term categorical data is also used interchangeably. To collect qualitative information, the researcher studies the experiences of individuals exposed to a certain situation or activity. Participants are asked general questions about a process, or about their acquaintance with a specific condition and their impression of or reaction to it. Qualitative data deals more with the individual's feelings, perceptions, behavior, and opinions regarding a certain situation, which is why it is often described as subjective or as having a somewhat intangible nature. The analysis and accurate interpretation of this type of data is therefore essential to its credibility. Qualitative data can be obtained through different instruments such as interviews, observation rubrics, field notes, or the investigation of archival data. As Johnson and Christensen point out, qualitative research is a relatively new and increasingly popular area of research methodology. Due to the "unobservable" nature of this type of data, researchers have developed better and more methodical ways to analyze it.

2.1 Interview
The interview, in which the researcher collects data by asking questions of a participant, is a valuable tool in a researcher's tool box. Interviews can be done in person or over the phone, depending on the type of interview used. The type of interview employed is determined by the context and focus of the research problem; there are three main types: key informant, survey, and focus group. Trust and rapport need to be established, and impartiality must be extended to all participants. Explaining the purpose and importance of the research, and assuring the participants that their input is either anonymous or confidential, helps establish trust. The strength of this data collection method is that clarifying or probing follow-ups can elicit further information on the spot. Caution should be taken to prevent bias through tone of voice or body language.

Interviews can be used to collect both quantitative and qualitative data. Quantitative data can be collected using the closed quantitative interview. Qualitative data can be collected using the standardized open-ended interview, the interview guide approach, or the informal conversational interview. Each interview style has its particular protocol and its embedded strengths and weaknesses. Qualitative data can also be collected via group interviews called focus groups. Focus groups are generally conducted with a specific group of individuals who have knowledge about the research topic. Groups usually consist of 7 to 10 people, and the discussion is structured. A smaller group tends to make participants feel less anonymous than they would like, so they may be reluctant to say what is on their minds; a larger group can be too unmanageable for a moderator to give everyone a chance to participate fully. Another form of interviewing is photo interviewing, in which the researcher uses visual data to elicit additional information during the interviewing process.

Interviewers also need to do research to prepare for the interview process. They need to know the subject under study so as to formulate questions that will get to the core of what they are researching. Gwen Ifill, the moderator for the Vice-Presidential debate on October 2, 2008, has over 20 years of experience in the political arena. She is moderator and managing editor of //Washington Week// and senior correspondent for //The NewsHour with Jim Lehrer//. Ifill explained on the //Oprah// show the day after the debate exactly how much research went into crafting unbiased questions for the two candidates, Palin and Biden. She read their biographies, all their published works, and any bills and government documentation they had created. She then brainstormed a list of many more questions than she needed, trying to find the ones that would be unbiased, fair, and neutral while still engaging debate. Her favorite open-ended question was, "What is your Achilles heel?" Per Ifill, her questions were unvetted by either party and were unknown to the candidates prior to the live debate.

Concerns had surfaced prior to the debate that she might be biased toward the Obama ticket because of a book she was writing about the breakthrough of Black candidates onto the American political scene, with an unwritten chapter about Obama. These concerns were unfounded. Her lack of bias was no accident, and it teaches a lesson about the work that must go into guarding against bias during any interview process.

I feel that Ms. Ifill, as well as the other moderators, did an outstanding job of not showing bias. I also think it was no accident that she, the only African American moderator, conducted the Vice-Presidential debate rather than the Presidential debate (Ms. Ifill also moderated the 2004 Vice-Presidential debate). This was probably another attempt to avoid rumors or accusations of bias.

As an addition to the topic of the interview, and because I thought it essential to include, I would like to add information about the focus groups discussed by the authors. When the word interview is heard, most people think of two individuals having a discussion, one asking questions while the other answers them. But according to Johnson and Christensen, a focus group is also a form of interview. This form of interview is conducted as a group collaboration rather than between just two individuals: a moderator leads a group discussion with the purpose of finding out how the individuals feel or think about a certain topic, perhaps the topic being researched. The moderator keeps the discussion on track so the topic or focus of the group is not lost to other conversations, and the facilitator collects the data, that is, the words of the group participants. Focus groups can be used in many ways. According to Stewart and Shamdasani, focus groups can be used in seven different ways:

1. Gathering background information about a topic of interest.
2. Creating research hypotheses that can later be used to begin research or testing material.
3. Coming up with new concepts or creative ideas.
4. Finding out if a problem exists with a new program, service, or product.
5. Discussing impressions of products, services, or programs.
6. Learning how group participants talk about a topic of interest.
7. Interpreting previously obtained quantitative findings.

When conducting a focus group, the participants usually consist of 6 to 12 people who have been chosen in advance to discuss a topic of interest to the researcher. The groups are usually homogeneous, to prevent the issues or conflicts that can arise in a heterogeneous group. The person chosen as the group moderator or facilitator must have strong interpersonal skills and must know how to lead a group of people. The focus group can be used as a supplement to another form of data collection; the information gathered this way is usually easy to understand and provides the researcher with in-depth information.

2.2 Observation
One form of data collection is observation of what people are actually doing. You can collect quantitative or qualitative data from observations. Quantitative data comes from counting or timing a behavior. For example, you might count how frequently a teacher calls on boys versus girls, or which side of the room a teacher calls on more; or you might time how long a teacher waits after asking a question before allowing someone to answer it. It is important to clearly define your variable and to use an effective method with a consistent format for observation; otherwise, your data may be inaccurate.

Counting can be a powerful tool of observation. Think about a student who complains, "You don't like me." If the teacher tallies how many times he or she gives negative responses to that particular student, a positive change in the teacher/student communication pattern can occur. A teacher probably would not be able to do this alone and would instead use another observer or videotape the session. This tool can also be employed by students who do not realize, for example, how many times they blurt out: just letting them keep a private, running tally of the problematic behavior can lead to self-awareness, thus diminishing or extinguishing the behavior in question.

In using observation to collect qualitative data, you may want to see what happens in a natural environment, for example, a classroom. You must define your role. You can be a //complete observer//, with no role in the environment of the participants. You can be an //observer-participant//, with no specific role but some interaction with the participants. You can be a //participant-observer//, with a specific role in the environment of the participants. Or you can be a //complete participant//, fully engaged in the environment with the participants.

One issue with observation is that your presence alone will tend to change what is happening, so your observations will not be free of your own influence. An example of this phenomenon in an educational setting: if you went to a teacher's room to observe the students, the students might behave differently than if you were not there. Another issue with observation in a natural environment is that you, as the researcher, must keep your objectivity; if you lose it, you have lost your ability to make valid observations. This can be extremely difficult if you are researching a situation in which the participants are in severe need, such as a poorly run day care center, and you are tempted to intervene. No matter what kind of observations you make, you must have a system for documenting what you are collecting so you can interpret your results after the observations. Even if you are just going in to make a peer observation, you need to plan ahead what behavior you will target and how to note what you see. This is where the data collection instrument plays a serious role.
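The counting style of quantitative observation described above reduces to a simple tally over a coded event log. The sketch below is illustrative only; the observation log (who a hypothetical teacher called on, in order) is invented data.

```python
from collections import Counter

# Hypothetical observation log: each entry records one call-on event,
# coded by who was called on during a single lesson.
log = ["boy", "girl", "boy", "boy", "girl",
       "boy", "boy", "girl", "boy", "boy"]

tally = Counter(log)
n = len(log)
for group, count in tally.most_common():
    print(f"{group}: {count} ({count / n:.0%})")
```

A consistent coding scheme (what counts as a "call-on", how each event is labeled) decided before the observation is what keeps tallies like this comparable across sessions and observers.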

2.3 Other methods
As we move further into the twenty-first century, it is important to acknowledge visual data collection as a tool for researchers. Visual data may include photographs, drawings, graphics, paintings, films, and videos. Our society is becoming more multimedia oriented, and therefore visual data may become more prevalent in the collection and presentation of data. For some people, a picture presents a concrete piece of information that is more readily processed than the abstractions of the verbal and numerical information gained from other types of collection. Visual data can also be used in both qualitative and quantitative ways. An issue with multimedia methods is the privacy of the participants: the researcher must obtain permission to film, photograph, and/or audiotape the participants, and good security measures to safeguard the data must be in place.

3. Data Collection: Threats to validity
Factors that can affect the validity of data collection include:

 * **Data collectors:** The people who are collecting data can impact its validity through their characteristics and personal bias. Their characteristics include their physical appearance (race, gender, age, and ethnicity) and even things like their preferences or language (dialect). The researcher's personal bias or expectations may also affect what they see and do not see when collecting data.
 * **The instrument:** The method used to collect data may be biased or flawed. An example would be using a pre-test and post-test: during a pretest, subjects often get an idea of what the researcher is studying, and this may affect their responses on the posttest. Sometimes the instrument itself is simply not very good and cannot provide valid data. A biased test might use content familiar only to certain demographic groups; other groups may score poorly on it despite having good skills or knowledge that goes unnoticed by the instrument, creating errors in the findings of the study.
 * **Analysis problems:** Instrument decay, or changes in how an instrument is scored over time, can affect the data collected. A researcher trying to prove something may, without realizing it, take note only of behavior that supports his or her hypothesis, or may observe a behavior but interpret the reason for it incorrectly.

3.1 Subject characteristics
Much emphasis is put on the characteristics of the sample group and on various sampling techniques, but just as important as determining the population is identifying the subjects used in the study. Unlike the psychology experiments of past eras, there are now very strong ethical requirements governing the use of human subjects in all types of psychological experiments. Long, detailed explanations and applications must be filed and approved even to conduct research using human subjects. Experimenters must have more than a clear vision of the research they want to study: they must know the research problem and hypothesis to be addressed in the study and establish the importance or significance of the data that will be collected as a result. The experimenter must describe the research study design, the measurement instruments and the setting in which they will be used, and about how much time will be required of each subject. Each subject's participation must be voluntary, and each must give informed consent. The ages of the subjects must be shared, and if the study includes children, the regulations are even stricter. The most important characteristic of research study subjects is whether they meet all the criteria of the experimenter and the study; otherwise they cannot be in the study, or they will invalidate the results.

In many studies a convenience sample is used. In these cases, the validity of the findings can be questioned. However, as more researchers replicate the study with different sample groups, the findings, if confirmed, can be considered valid.

3.2 Data collector characteristics
One aspect of data collector characteristics involves the responses of the data collector in an interview. The researcher should not respond in a way that makes participants feel they know what types of answers are desired; staying neutral during an interview takes training and practice. Other aspects, such as physical appearance, are mentioned above under **Data Collection: Threats to Validity**. Generally, when the participants feel comfortable with the interviewer because of a shared culture (race, age, language, class, etc.), there is a better chance of getting good data.
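When two data collectors code or score the same observations, their consistency (the interscorer reliability mentioned earlier in this chapter) can be quantified. A standard statistic for this is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative; the two raters' codes are invented.

```python
def cohens_kappa(rater1, rater2):
    """Agreement between two raters beyond chance (Cohen's kappa)."""
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    # Observed proportion of items the raters coded identically.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement, from each rater's marginal label frequencies.
    expected = sum(
        (rater1.count(label) / n) * (rater2.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two observers watching the same six events.
rater1 = ["yes", "yes", "no", "yes", "no", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(rater1, rater2), 3))   # 0.333
```

Kappa runs from 1.0 (perfect agreement) down through 0 (chance-level agreement); a low value like the one here would signal that the two data collectors need clearer coding rules or more training before their data can be trusted.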

3.3 Other concerns
The treatment of research participants or subjects is one of the most important and fundamental issues that researchers must confront. Improper conditions during a research study can lead to serious physical and psychological harm, so the overriding concern in any research testing is whether it is ethical. Several guidelines must be followed for ethical research to be conducted using human subjects:


 * Informed Consent- subjects must agree to participate after they are made aware of the study's components
 * Freedom to Withdraw- subjects are given the freedom to "pull out" of the study at any time unless otherwise specified in the agreement to participate.
 * Protection from Mental and Physical Harm- subjects participating in studies must be treated in such a way as not to cause physical or mental harm.
 * Confidentiality- subjects have the right to remain anonymous and the data collected on them must be kept confidential.
 * Institutional Review Board- all human research is legally required to be reviewed by this board to determine if there are any ethical issues of concern.


4. Summary