Surveys

 

What are surveys?

Surveys are the quintessential quantitative methodology. They are used to collect quantitative information about items in a population. Survey research is a method of data collection in which a defined group of individuals are asked to answer a number of identical questions. Its aim is to measure attitudes, knowledge and behaviour. It is commonly used in social science, market research and in public health.

The components of a survey include selecting a sample of respondents and presenting the survey questions to the sample, either in an interview or in a self-administered questionnaire. A survey can be descriptive (i.e. describing the nature of existing conditions), explanatory (i.e. seeking to establish cause-and-effect relationships), theory building or instrument testing (e.g. testing a newly developed health status or quality-of-life instrument, or a tool to measure a psychological condition). The unit of analysis is usually the individual, though it can be an organisation (units of analysis are called cases).

Surveys can take the form of a census or be a sample. If it is a sampled survey, it would be one of the following:

  • Cross-sectional - a snapshot of the population at a particular point in time
  • Longitudinal Trend - a given general population is sampled and studied at different points in time
  • Longitudinal Cohort - this focuses on the same specific population each time data are collected, although the samples may be different
  • Longitudinal Panel - the same sample of respondents over time


Stages in Survey Research

  1. Survey design - what topic and population are to be studied, and which form of survey will be used.
  2. Sampling
  3. Instrument design - will you be measuring attitudes? If so, will it involve index or scale construction?
  4. Evaluation of the survey - pretest and pilot
  5. Collection of data
  6. Collating and managing data
  7. Analysis of survey data


Key Terms and Concepts

Survey Instrument

Schedule of questions or response items

Response items

Individual survey questions or statements for which a response is solicited.

Interviews

In the context of surveys, this refers to face-to-face administration of a survey instrument. Here the researcher reads out the questions and ticks the answers on behalf of the respondent. The researcher cannot and does not engage in conversation, as this may bias the collection of data. If there are open-ended questions, they are simply asked with no follow-up for clarification, unlike in qualitative interviews. Such questions are coded post hoc.

Questionnaires

This refers to post, email or other indirect methods of administration.


Initial Steps

  1. Pick your topic - what theory or concept are you testing?
  2. Decide on your population, then select your sampling frame and then your sampling method.
  3. Sampling is crucial (inexperienced researchers often overlook this and spend all their time on designing the survey instrument). Your sample MUST BE REPRESENTATIVE of the population you are surveying.


Sampling

There are two types of sampling: Random (Probability) and Non-random (Purposive).

Random sampling is used in descriptive and explanatory studies (i.e. testing of hypotheses). Examples of random sampling are simple, systematic, stratified and cluster sampling.
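As a rough illustration of how these probability designs differ in practice, here is a minimal Python sketch. The sampling frame, the 'region' strata and the sample sizes are all invented for illustration:

```python
import random
from collections import defaultdict

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: 1,000 people, each belonging to a region.
frame = [{"id": i, "region": random.choice(["North", "South"])} for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(frame, 100)

# Systematic sampling: a random start, then every k-th member of the frame.
k = len(frame) // 100
start = random.randrange(k)
systematic = frame[start::k][:100]

# Stratified sampling: sample within each stratum in proportion to its size
# (rounding may make the total deviate from 100 by one).
strata = defaultdict(list)
for person in frame:
    strata[person["region"]].append(person)
stratified = []
for region, members in strata.items():
    n = round(100 * len(members) / len(frame))
    stratified.extend(random.sample(members, n))
```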

Non-random sampling is used in theory and instrument development. Examples are quota or volunteer sampling.

There is also a sampling strategy for small-scale studies or for groups that are hard to reach: snowballing or network sampling. This is used when there is no adequate list for a sampling frame. Nominated individuals nominate others. It is feasible in surveys on illegal activities, but it can only be used for members of a network who share the characteristics of interest.

Sources for sampling include the electoral register, the postal register or multiple-list sampling (e.g. members of the Multiple Sclerosis Society). For telephone surveys, you can use the telephone directory or random-digit dialling. However, make sure you select a respondent from within the household (e.g. the person who most recently had a birthday); it shouldn't simply be whoever answers the phone.

How big should my sample be?

The principle of sample sizes is that the smaller the population, the bigger the sampling ratio has to be for an accurate sample. Larger populations allow smaller sampling ratios for equally good samples. As the population size grows, the returns in accuracy for sample size shrink.

The best sample size depends on the degree of accuracy required, the degree of diversity in the population and the number of different variables being examined simultaneously in the data analysis.

There are two ways to determine sample size: 

(1) calculation of sample size by using statistical equations or 

(2) use of 'rule of thumb', i.e. samples based on previous experience that have met the requirements of the statistical method (see Neuman 1997 below):


For populations under 1000,  you need to sample 30% of the population.

For populations of 10,000, you will need 10%.

For populations of over 150,000 you require 1%.  

For very large populations of over 10 million, you only need 0.025%. The size of the population ceases to be relevant once the sampling ratio is very small, so a sample of 2,300 is as accurate for 200 million as it is for 10 million.
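For route (1), the statistical-equation approach, the text does not name a formula, but one standard choice (an assumption on my part) is Cochran's sample-size formula for estimating a proportion, with a finite population correction:

```latex
% Cochran's formula: initial sample size n_0 for estimating a proportion p
% with margin of error e, where z is the z-score for the chosen confidence
% level (z = 1.96 for 95% confidence):
n_0 = \frac{z^2 \, p(1-p)}{e^2}

% Finite population correction for a population of size N:
n = \frac{n_0}{1 + \frac{n_0 - 1}{N}}
```

With the conservative choice p = 0.5, e = 0.05 and z = 1.96, n_0 ≈ 384; correcting for a population of N = 1,000 gives n ≈ 278, broadly in line with the 30% rule of thumb above.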


Sub-group analysis will affect sample size, so the rule of thumb is to have at least 50 cases for each subgroup being analyzed. For example, suppose I want to analyze four variables for males between 30 and 40 years old, and this group makes up only 10% of the population: I then need 10 x 50 = 500 cases in total for this sub-group analysis.
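The same arithmetic as a tiny Python helper (the function name and the example figures are just for illustration):

```python
import math

def total_sample_for_subgroup(min_cases: int, subgroup_percent: float) -> int:
    """Rule-of-thumb total sample size so the smallest subgroup of interest
    still contains at least `min_cases` cases."""
    return math.ceil(min_cases * 100 / subgroup_percent)

# Males aged 30-40 make up 10% of the population; we want 50 of them:
print(total_sample_for_subgroup(50, 10))  # -> 500
```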


Types of Surveys

  1. Postal Survey
  2. Telephone Survey
  3. Face-to-Face Interview
  4. Email or internet survey
  5. Computer-assisted telephone interviewing (CATI)
  6. Computer surveys (where participants complete their questionnaire on a computer rather than on paper)

Note: If you're doing a postal survey, make sure you include a cover letter and an SAE (self-addressed envelope). Don't use metered postage, as it dates and the post office may refuse to deliver it; people are also more likely to return the questionnaire if you have spent money on a stamp! Do send out reminders - usually two (the first, 2 to 3 weeks after the questionnaire, and the second 2 to 3 weeks after that). The first reminder is just a reminder, but you can resend the questionnaire with the second one (assume the original was discarded). You could send more, but generally two reminders are enough. Responses mostly arrive immediately and then taper off; by 6 weeks you probably have all the responses you'll get, so it is not cost-effective or timely to continue collecting data.


When doing face-to-face or telephone interviews, be clear and friendly when giving the instructions. Face-to-face interviewing can be done on the street or with groups (using a closed interview schedule), and it is also used for the census.


When doing internet surveys, sampling can be very difficult as the population size may be uncertain. It is possible to find out the number of people registered to a certain Facebook page or group and survey them, so you will be able to calculate response rates. However, despite being able to calculate reach on social media, the Facebook users may not be representative of the population you are trying to survey, and this has implications for the generalisability of your findings. Self-selection on the Internet can be completely unstructured; even a named forum (e.g. for junior doctors) will have members who fall outside the sampling frame.


What types of information does a survey cover?

1. Attributes - personal, socio-economic characteristics such as sex, age, marital status, religion and occupation.

2. Behaviour - what the individual has done, is doing and may possibly do in the future.

3. Attitudes - imply evaluation and are concerned with how people feel about an issue. Questions about attitudes usually employ scales - i.e. a statement is made and individuals are asked to indicate their level of agreement in a positive or negative direction.


Instrument Design

1. After deciding your questions, you must combine them into a questionnaire.

2. A questionnaire is not a haphazard collection of questions but a carefully formulated sequence of questions.

3. The questionnaire is structured to obtain information that meets the requirements of your research project.

When laying out your questionnaire, remember filter questions (opening questions), funnel questions (successive questions, each more specific than the last) and linking questions (usually an open-ended question to elicit general opinion about the topic being investigated).


General Rules of Questionnaire Construction

  1. Include only the questions that address your research concerns and which you plan to analyze.
  2. Make the questionnaire as appealing as possible.
  3. Keep the questionnaire as short as possible but do include all necessary questions to cover all aspects of the research problem.
  4. Consider in advance all possible issues that the respondent may raise.
  5. Minimize use of open-ended questions!
  6. Ensure anonymity and confidentiality. (This can be difficult if you're surveying a particular group of people with the aim of doing follow-up interviews and you need to track who has responded and who has yet to respond. You can use a separate postcard, detachable from the questionnaire, to account for responses. Do not attempt to hide the code number - it's not ethical!)


Survey Instrument Order

1. A cover letter or introduction disclosing the purpose and sponsorship of the survey followed by the instructions.

2. The survey should then present non-threatening items which arouse interest (these can include filter questions and open-ended questions).

3. The first question should be clearly related to the announced purposes of the survey (not a background item).

4. Opening questions are important as they encourage the respondent to complete the questionnaire. Non-threatening questions can include demographic items - e.g. how many people over the age of 18 live in your house?

Note: the further an item is towards the end of the survey, the lower its response rate! So do not leave your open-ended questions until the end of the questionnaire (they won't get answered); put them in the middle or at the end of sections instead. You can also vary the order of the survey instrument across respondents.

5. The survey can then proceed with attitudinal questions.

6. Group items into logical coherent sections, i.e. under specific topics.

7. Demographic information can be placed early on to avoid loss of information due to fatigue.

8. Sensitive background items (e.g. income) should be at the end. By then, you have gained respondents' trust and they are more likely to answer.


Item Bias - What to Avoid?

  1. Ambiguity - questions should be specific, avoid generalities, e.g. "On a scale of 1 to 10, how popular is Gordon Brown?" Popular with whom? Make sure there are no alternative meanings.
  2. Non-exhaustive response set - e.g. leaving out 'neutral' or 'don't know'
  3. Ranking lists and multidimensionality - e.g. "On a scale of 1 to 10, rank the performance of your local Councillor?" (How would you rank?)
  4. Loaded terms - e.g. "Do you lean more towards the pro-life or towards the pro-abortion position on the issue of termination of late-term pregnancies when the health of the mother is threatened?" (Use of 'pro-life' rather than 'anti-abortion', and 'pro-abortion' rather than 'pro-choice', is judgmental.)
  5. Leading questions - e.g. "Do you favour an increase in minimum wage to 10 pounds an hour?" (Of course they do!)
  6. Unfamiliar terms and jargon
  7. Compound items and complexity - an example of compound items is "do you have or have you ever had a physical, mental or other health condition that has limited the kind of work you can do?" (What would yes mean?) An example of complexity would be use of double negatives in the question.
  8. Social desirability and hypothetical questions.
  9. Vague quantifiers - avoid 'often', 'sometimes', etc. These can mean different things to different people. Be specific - say 'in the last 7 days' rather than 'in the last week'.
  10. Note recall bias!  Keep time periods as short as possible to minimize problems of memory recall.  


Reliability

Reliability is the consistency of a measure of a concept.

Internal (Consistency) Reliability

This is the most commonly used method to test reliability. It is measured using the Cronbach's alpha statistic (for items with more than 2 response categories) and the Kuder-Richardson (KR-20) test (for items with 2 response categories, e.g. yes/no). Internal consistency involves testing for homogeneity, which assumes that the correlations between items on a scale are not the result of random chance but reflect a real patterning in how the questions are answered. If the alpha statistic is < 0.5, this is regarded as low internal reliability (i.e. the items are not measuring the same phenomenon).
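For reference, Cronbach's alpha for a scale of k items compares the item variances with the variance of the total score; it rises towards 1 as the items co-vary more strongly:

```latex
% Cronbach's alpha for a k-item scale, where \sigma_i^2 is the variance of
% item i and \sigma_T^2 is the variance of respondents' total scores:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_T^2}\right)
```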

Test-retest reliability

Does the measure produce the same or similar results from the same respondents if administered at different points in time? Usually the questionnaire is administered on 2 occasions separated by a few days. Ideally, responses shouldn't vary, but in health research it is possible that health status changes in between.
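One simple way to quantify this is a correlation between the two administrations. A minimal sketch with invented scores (an intraclass correlation is often preferred in practice, but Pearson's r illustrates the idea):

```python
import numpy as np

# Hypothetical total scores for the same 8 respondents on two occasions.
time1 = np.array([12, 18, 25, 30, 22, 15, 27, 20])
time2 = np.array([13, 17, 26, 29, 21, 16, 28, 19])

# Pearson correlation as a simple test-retest reliability coefficient;
# values near 1 indicate stable measurement across administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```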


Validity

Validity is concerned with whether or not the measurement of a concept really measures the concept.

  1. Face Validity - do the questions make sense and do they appear to be relevant?
  2. Content Validity - is the choice of and relative importance given to each question appropriate for the phenomenon being measured?
  3. Criterion Validity - does the measure produce results that correspond with a superior one (gold standard)?
  4. Construct Validity - do the results obtained confirm expected relationships or hypotheses?

Just think internal validity and external validity!


Quick Note on Indices and Scales

While nearly every social phenomenon can be measured, not all can be measured directly (for instance, predisposition to commit adultery). Indices and scales help us to condense and simplify social and behavioural information, and they can be used to assess the quality of measurement too. For most purposes, scales and indices are treated as interchangeable. They should be unidimensional - i.e. all items should fit together to measure a single construct. However, there is a difference.


An index is constructed through the simple accumulation of scores assigned to specific responses to the individual items comprising the index. In other words, it can be the total number of questions on a construct (you could add up the scores from each question).  An index can be measured at an interval or ratio level, which improves reliability and validity as it uses multiple indicators. 


A scale is constructed through the assignment of scores to response patterns among the several items comprising the scale. It recognizes that some items reflect a relatively weaker degree of the variable.  So a scale captures the intensity or direction of a variable by arranging responses on a continuum.  Scales are usually ordinal. 


Taking an example of sexism: if using an index, you just add up the number of prejudiced statements each respondent agreed with. A scale takes into account that agreeing with "women are different to men" is weak evidence of sexism compared with agreeing with "women should not be allowed to vote". This takes advantage of any intensity structure that may exist among attributes.
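The contrast in scoring can be made concrete with a small Python sketch. The item wordings come from the example above; the intensity weights are invented purely for illustration:

```python
# Hypothetical sexism items, ordered from weak to strong, with illustrative
# intensity weights for the scale version (weights are assumptions).
items = [
    ("Women are different to men", 1),           # weak indicator
    ("Women should stay at home", 2),
    ("Women should not be allowed to vote", 3),  # strong indicator
]

# One respondent's agreement (True = agreed) with each statement.
agreed = [True, False, True]

# Index: simple accumulation -- count the statements agreed with.
index_score = sum(agreed)

# Scale: weight agreement by the intensity of each item.
scale_score = sum(w for (_, w), a in zip(items, agreed) if a)

print(index_score, scale_score)  # 2 4
```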


You can combine items in a Likert scale to produce a composite index if all items measure a single construct. A Likert scale is always ordinal: whether you use 10 or 40 items, the responses are still 4 ordered categories. Distances between categories remain ordinal; they do not become intervals just because numbers are assigned.


Good practice is to switch the direction of questions to avoid response set - i.e. someone who always agrees no matter what the question! You can use a -2 to +2 scoring, where 0 is neutral and the - and + signs help denote negative or positive feelings for your respondents.
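When items switch direction, the negatively worded ones must be reverse-coded before summing into a composite. A minimal sketch on a 5-point battery (item names and the reversed flag are hypothetical):

```python
# Responses on a 5-point Likert battery: 1 = strongly disagree ... 5 = strongly agree.
responses = {"q1": 5, "q2": 2, "q3": 4}
reversed_items = {"q2"}  # negatively worded, so its direction must be flipped

def score(item: str, value: int, scale_max: int = 5) -> int:
    # Reverse-code negatively worded items so that high scores always
    # point in the same direction of the underlying construct.
    return (scale_max + 1 - value) if item in reversed_items else value

composite = sum(score(item, value) for item, value in responses.items())
print(composite)  # 5 + 4 + 4 = 13
```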


Factor analysis helps to construct indexes, test the unidimensionality of scales, assign weights to items in an index and statistically reduce a large number of indicators to a smaller set.  
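As a minimal sketch of the unidimensionality check, here is scikit-learn's FactorAnalysis applied to simulated data; the respondents, loadings and noise level are all invented so that the battery should come out as (roughly) one-dimensional:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents answering 6 items driven by a single latent factor.
latent = rng.normal(size=(200, 1))
loadings = np.array([[0.9, 0.8, 0.7, 0.75, 0.85, 0.6]])
items = latent @ loadings + rng.normal(scale=0.5, size=(200, 6))

fa = FactorAnalysis(n_components=1)
fa.fit(items)

# Uniformly high loadings on one factor support treating the items as a
# single scale; weak or split loadings would suggest multidimensionality.
print(np.round(fa.components_, 2))
```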


Pretesting the Questionnaire

  1. Pretest with a convenience sample.
  2. Usually involves personally administering it.
  3. Ask participants to interpret each question in their own words.
  4. Ask participants for their thoughts, questions and ideas about the questionnaire.
  5. Observe interviewer and interviewee relationship.


Piloting the Questionnaire

This involves conducting the questionnaire under simulated or actual research project conditions.


Response Rates

Anything below 50% is poor and anything over 90% is excellent.  Below 75%, survey results can differ significantly from what they would be if everyone had answered.

For telephone surveys, a non-contact rate of 20% is common. 
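For concreteness, a sketch of one common way to compute a response rate, excluding ineligible cases such as undeliverable questionnaires (the figures are hypothetical, and definitions vary - AAPOR, for instance, publishes several standard variants):

```python
def response_rate(completed: int, sampled: int, ineligible: int = 0) -> float:
    """Completed questionnaires divided by the eligible sample."""
    return completed / (sampled - ineligible)

# Hypothetical postal survey: 1,000 mailed, 40 undeliverable, 520 returned.
print(f"{response_rate(520, 1000, 40):.0%}")  # 54%
```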


Note Survey Burnout or Feedback Fatigue

There has been a rise in survey non-response rates over the past ten years. By non-response we mean people who don't take part in surveys at all. This is different from item non-response, whereby someone answers some questions and not others. Non-response is sometimes used interchangeably with refusal rates, but refusal rates are often included within non-response rates: 'refusal' denotes actively saying no (and researchers have a means to capture some personal details, so we know who is likely to refuse), whilst non-response also covers people who can't be bothered, are too busy or have other reasons for not complying.


Studies are being done on the reasons for this increase, such as looking at the effects of interviewers, as interviewer attributes have been associated with higher non-response rates in panel and cross-sectional surveys. Interviewers play an important role in introducing the survey, engaging the respondent, addressing any queries and gaining responses. Some surveys now build mechanisms into their design to reduce non-response at the design stage or during data collection by reducing the influence of the interviewer; this can be done through effective policies and management strategies at the research agency. There are also area effects on non-response - some geographical areas have higher non-response rates, suggesting that shared socio-economic, cultural or other factors are at play. At the end of the day, keep your survey method quick and easy to use, and ensure that the person filling it out knows that their input will help decision-making, result in improvements or provide invaluable information.


Further Reading

Aldridge & Levine. Surveying the Social World (Oxford University Press, 2001).

Babbie, E. Survey Research Methods (Wadsworth, 1990).

Babbie, E. The Practice of Social Research (Wadsworth, 1995).

Baker, T. Doing Social Research. 2nd Edition. (McGraw-Hill, 1994).[Chapter 7].

Bernard, R. Social Research Methods – Qualitative and Quantitative Approaches (Sage, 2000).

Bowling, A. Research Methods in Health. (Open University Press, 2002).

Boynton, P. "Administering, analysing and reporting your questionnaire". BMJ 2004;328:1372-1375.

Boynton, P. & Greenhalgh, T. "Selecting, designing and developing your questionnaire". BMJ 2004;328:1312-1315.

Bryman, A. Social Research Methods (Oxford University Press, 2001). [Chapters 3, 4, 5, 6, & 7, all of which address survey design].

Burns, R. B. Introduction to Research Methods (Sage, 2000).

De Vaus, D. Research Design in Social Research (Sage, 2001).

Gilbert, N. Researching Social Life (Sage, 2001). [Chapters 6 & 7 on questionnaires and measuring attitudes].

Leedy, P.D. Practical Research: Planning and Design (Prentice-Hall, 1997).

Neuman, W.L. Social Research Methods: Qualitative and Quantitative Approaches. (Allyn & Bacon, 1997). [Chapter 6 on research design, chapter 7 on quantitative measurement, chapter 9 on sampling and chapter 10 on survey design].

Peterson, R. A. Constructing Effective Questionnaires (Sage, 2000).

Seale, C. (Ed). Researching Society and Culture (Sage, 2001).

Wright, K. "Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services". Journal of Computer-Mediated Communication 2005;10(3).