Welcome to the video on quantitative data collection. Quantitative methods emphasize objective measurement and numerical analysis. In this video, we will be discussing two different methods of quantitative data collection: surveys and administrative data. We will talk about the strengths and weaknesses of each approach.

So what is a survey? A survey is a list of questions aimed at collecting specific data from individuals. Surveys can be a useful tool when you want to collect information from a large group of people, collect basic information such as counts of characteristics or behaviors, or collect information on a particular concept. Self-report questionnaires are the most widely used method of measurement in many fields, including positive psychology. They allow you to collect information directly from the stakeholders you're interested in. Because they know themselves best, they are perhaps best equipped to answer questions about many positive psychology indicators, like their state of well-being, for example.

Surveys have many strengths. They can be used to collect data from large samples; for example, you can get basic, descriptive information from a large sample of respondents. They can collect data anonymously, which is important if you're asking about sensitive information and are worried about the influence of the researcher on honest reporting of outcomes. They can collect data quickly and inexpensively, though this depends on the scale of the survey that you're administering. And they can collect data in a way that is convenient for respondents: surveys are typically shorter than other methods and can often be taken in the comfort of one's own home, either on the computer, over the phone, or on paper.

Many standardized surveys and questionnaires exist in the field of positive psychology, which is helpful for several reasons. First, if you choose to use a survey or a subset of questions that have already been developed, you reduce the time needed for survey design. Second, many existing surveys have already been thoroughly pre-tested and are known to be reliable ways of collecting specific information. Third, using an existing survey instrument or questions allows for comparisons to other groups and/or other time periods. So, for example, the PERMA model provides five core elements of psychological well-being and happiness. You don't need to develop a new set of measures on well-being, since this one has already been vetted and can be used to measure happiness.

But surveys also have weaknesses worth noting. Let's discuss these in more detail. First, certain people might be excluded or interpret questions differently. For example, if you want to administer a web survey, anyone who doesn't have e-mail or access to the internet can't participate. Surveys can also be challenging for some populations regardless of the mode, such as children or people with low reading comprehension. This is an important consideration and a reason why mode selection, for example web or paper, is something that you should keep in mind.

Second, getting an adequate response rate can be difficult, which can mean that your sample is not representative of the population you are trying to understand. Oftentimes in positive psychology research, fewer than 50% of people might take your survey, for example. It's not clear then how representative the people who actually responded to the survey are, which, as we discussed in the first week, is a limitation of external validity. That means that the information the study provides may only tell you something very specific about the one group of people who responded. Unfortunately, the people who don't respond are often the very people you care most about. For example, people who are low in self-discipline have been shown to be much less likely to respond to surveys. So you're probably getting the most conscientious people responding, which may not translate to the population you are trying to help.
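To make this concrete, here is a minimal sketch in Python of how you might check how representative your respondents are. All of the numbers and variable names here are hypothetical; the idea is simply to compare the response rate, and the makeup of respondents, against the population you hoped to reach.

```python
# Hypothetical check of survey response rate and representativeness.
# All figures are invented for illustration.

invited = 1000          # people who received the survey
responded = 430         # people who completed it

response_rate = responded / invited
print(f"Response rate: {response_rate:.0%}")   # 43% -- below 50%

# Compare a known characteristic of the full population
# against the same characteristic among respondents.
population_share_female = 0.52   # from school records, say
respondent_share_female = 0.61   # observed among respondents

gap = respondent_share_female - population_share_female
print(f"Over-representation of female respondents: {gap:+.0%}")

# A large gap on characteristics you *can* observe is a warning sign
# that respondents may also differ on characteristics you *cannot*
# observe, such as self-discipline -- the external validity concern.
```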
Additionally, surveys rely on accurate self-reporting, and there are several different types of concerns with self-report data.

First, the influence of context raises questions about reliability. If I ask you how dependable you are, your answer may vary depending on the context, because you may be more dependable in certain contexts than in others. My husband will tell you that I'm really dependable at work, but I'm not always so dependable at home and getting things done around the house. So when you're designing a survey, consider how best to frame the context for people.

A second concern with self-report data is reference group bias. People may have different references or frames for what you mean by "dependable," for example. I come from a family of highly dependable people, so when I forget to do one thing, I view myself as not very dependable, and that frame of reference is likely very different from others'. As an example from positive psychology research, studies have shown that teachers' self-discipline ratings of students at low-performing schools are often higher than those of students at high-performing schools, demonstrating that the students' reference group really matters. Other studies have shown that as students learn more about self-discipline, their self-reported levels of self-discipline actually go down, because their reference group changes and they hold themselves to a higher standard. So reference group bias is something that is really important to consider when reviewing positive psychology articles.

Another concern is the halo effect, which means that our judgment of one trait is polluted by another. Someone might be quite beautiful, which might bias some people to also think that they are talented and smart and possess a whole host of other positive traits, which is not always the case.

In the cases above, people may not realize that they are not reporting data correctly. In some cases, however, they may be intentionally answering the questions differently. The extreme case would be faking, where people actually lie on a survey. There is also a less extreme version called social desirability bias, where people just stretch their answers a bit. This is particularly a concern when surveys are not anonymous, since people may not want to admit that they are low in self-control. Or perhaps they really don't understand themselves that well after all; as an example, research has shown that self-rated intelligence is only moderately related to IQ test scores.

Some of these challenges are unavoidable, but there are strategies you can use when designing surveys. Here are some tips if you are designing your own, with a small illustration after the list.

Use questions that are easy to understand. This means ensuring that your survey falls within the reading level of your target population and that questions are clearly worded.

Provide definitions when necessary. For example, you might be interested in how often your population drinks soda. The term soda means different things to different people. To some, it means soda water or seltzer. To others, it means Coke and Pepsi, but not diet soda. The point is, you should define what you mean by soda, or by any other word for which different people might have different interpretations or definitions.

Provide a context or reference frame for answering questions. To use the example of how often someone drinks soda, the answer choices of never, rarely, sometimes, and often might mean different things to different people. In a family where most people drink soda at every meal, a person who drinks it only once a day might answer sometimes, whereas someone in a family that never drinks soda might answer often to describe the same frequency. In this case, you would want to provide more detailed answer options, such as once or twice a week, several times per week, once a day, or several times per day.

It's also a good idea to start with more straightforward questions and move toward sensitive ones. This helps to build rapport with respondents and makes them more comfortable once you get to more personal questions. This is also an important reason to ask demographic questions at the end of the survey.

Consider the relative value of open-ended versus closed-ended questions based on what you're hoping to learn.

Finally, to make the survey as easy and straightforward as possible for the respondent, maintain a parallel structure for questions. It's not a good idea to jump around between different response scales, such as asking a few how-often questions, then an agree-or-disagree question, then a satisfaction question, and then another how-often question. Group similar questions together, or group questions with the same response options together.
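As a sketch of that last tip, here is one hypothetical way to represent survey items in Python so that questions sharing a response scale stay grouped together. The item wordings and scale labels are invented for illustration, not taken from any standardized instrument.

```python
# Hypothetical survey definition: items are grouped by response scale
# so respondents are not bounced between answer formats.

FREQUENCY_SCALE = ["Never", "Once or twice a week", "Several times per week",
                   "Once a day", "Several times per day"]
AGREEMENT_SCALE = ["Strongly disagree", "Disagree", "Neither",
                   "Agree", "Strongly agree"]

survey = [
    # Block 1: all frequency items share one scale.
    {"scale": FREQUENCY_SCALE, "items": [
        "How often do you drink soda (sweetened soft drinks such as cola)?",
        "How often do you exercise for at least 30 minutes?",
    ]},
    # Block 2: all agreement items share one scale.
    {"scale": AGREEMENT_SCALE, "items": [
        "I finish whatever I begin.",
        "Setbacks don't discourage me.",
    ]},
    # Demographics come last, after rapport is established.
    {"scale": None, "items": ["What is your age?"]},
]

for block in survey:
    for question in block["items"]:
        print(question, "| options:", block["scale"])
```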
The other type of quantitative data collection that we will talk about is administrative data. There are many different types of administrative data. Ratings or observations by a third party, like the teacher or parent ratings we discussed, could be considered a form of administrative data. Biodata, or biographical data collected on an individual, is another form of administrative data. Assessments such as test scores or intelligence tests, or other programmatic outcomes that have already been collected, like income, employment, or satisfaction, are also administrative data.

These types of data have a couple of key strengths. Often, this method uses data that already exists, so it can be time-efficient and cost-effective compared to other forms of data collection. In the case of test scores or information on income or employment, you may be able to access this data easily. Administrative data also allows for consistent tracking over time; in the case of employment data, you may be able to get data going back several years. Administrative data is less likely to be biased by challenges with self-reporting, such as social desirability bias or faking. Finally, using administrative data does not disrupt a program or burden respondents.

Sometimes these administrative measures are intended to address the shortcomings of survey data. Take the example of the biodata measure from the True Grit study we've been reviewing. We created a seven-point scale, from 0 to 6, for assessing teachers' grit, measuring sustained perseverance and passion in college activities. We did this by adapting a rubric previously used to quantify follow-through in high school seniors. You received one point for each college activity or work experience in which participation lasted for at least two years. You received an additional point for activities in which a moderate level of achievement had been attained, and two additional points for activities in which a high level of achievement had been attained. Moderate achievement was defined as leadership positions or awards within an activity, though not the highest form of either: for example, the secretary of an organization or the assistant manager of a restaurant. High achievement was reserved for those individuals running organizations or reaching the highest honor within an activity or work experience: for example, the president of an organization or the MVP of a team. Thus, for any given activity, you could receive from zero to three points, and your final grit score was the sum of your scores across two activities, for a total of up to six points.
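To make the arithmetic of the rubric concrete, here is a minimal sketch in Python of how the scoring could work. The activity records and field names are hypothetical, and summing the two highest-scoring activities is one reading of the rubric just described; the point values follow it directly.

```python
# Hypothetical implementation of the biodata grit rubric described above.
# Per activity: 1 point for >= 2 years of participation, plus 1 point
# for moderate achievement or 2 points for high achievement (0-3 total).
# The final score sums the two highest-scoring activities (0-6 total).

def activity_points(years: float, achievement: str) -> int:
    points = 1 if years >= 2 else 0
    points += {"none": 0, "moderate": 1, "high": 2}[achievement]
    return points

def grit_score(activities: list[dict]) -> int:
    scores = sorted(
        (activity_points(a["years"], a["achievement"]) for a in activities),
        reverse=True,
    )
    return sum(scores[:2])   # top two activities, capped at 6

# Example resume: three years as a club secretary (moderate achievement)
# and two years as a team MVP (high achievement).
resume = [
    {"years": 3, "achievement": "moderate"},   # 1 + 1 = 2 points
    {"years": 2, "achievement": "high"},       # 1 + 2 = 3 points
]
print(grit_score(resume))   # 5
```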
We gathered all of this information from individuals' resumes. Now, this measure was much harder to fake than a self-report measure might be. But we still had to ask questions about construct validity: were we actually capturing grit, or some other construct like leadership? And we had to ensure that raters were normed and had high inter-rater reliability, so that they judged things like moderate and high achievement or advancement in the same way. So while it did address some shortcomings, this measure, like other administrative measures, presents its own challenges.

The bottom line is that there is no perfect measure. Every measure has error. Some of that error is random, or not controllable, and some of that error is systematic, biased by the way the measure was designed or administered. So if you are collecting data, you should employ what's called the principle of aggregation: using multiple measures over time to reach conclusions, so that the random errors cancel out and you're more confident in your results. And when you're interpreting studies, you should ask questions about the validity and reliability of both survey and administrative data, and ask yourself what this means about the validity of the study's conclusions.

That brings us to the close of this video on quantitative data collection. In the next video, we'll discuss several different approaches to qualitative data collection.