Survey Measurements
Techniques, Data Quality and Sources of Error
(Language: English)
Product Details
Product information on "Survey Measurements"
Scientific surveys cannot deliver meaningful results if their data quality is impaired by missing or distorted answers. One challenge for social research is to identify and control such sources of error. This volume presents findings and methods for dealing with unit nonresponse, missing data, and various kinds of measurement error in the context of web and mixed-mode panel, mobile web, and face-to-face surveys.
Reading sample from "Survey Measurements"
1. Introduction
Uwe Engel
1.1 Data Quality
Surveys are important for society. They are frequently conducted and serve as useful sources for public opinion and decision making. Although even the outcomes of high-quality surveys are not safe from being misinterpreted, either inadvertently or even deliberately, high-quality survey data are likely to reduce this risk. For scientific reasons as well, strictly speaking, only high-quality survey data appear acceptable. This is why survey methodology pays so much attention to possible threats to data quality and has been doing so for quite some time (e.g. Biemer and Lyberg 2003, Weisberg 2005).
Why is high data quality so important for survey research? One possible answer to this question may point to the risk of obtaining biased sample estimates of population parameters if a survey fails to cope with relevant sources of survey error. Probability sampling and proper use of statistical estimators alone cannot guarantee unbiased estimates, because even in this case nonresponse and measurement effects may still give rise to bias and error variance.
Accordingly, one core task certainly consists in the development of suitable statistical models and techniques to adjust for nonresponse bias. Even the ideal case of complete (or perfectly nonresponse-adjusted) response, however, cannot guarantee unbiased sample estimates, for a simple reason: observed responses may deviate from their corresponding true scores due to measurement effects.
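The weighting-class logic behind such nonresponse adjustments can be sketched in a few lines. The following Python snippet is only an illustration with invented numbers, not a method or data from the volume: it reweights respondents by the inverse of their group's estimated response rate and compares the adjusted mean with the naive respondent mean.

```python
# Illustrative sketch of a weighting-class adjustment for unit nonresponse.
# All counts and means are hypothetical; the technique (inverse
# response-rate weighting within classes) is standard survey practice.

# Sampled cases per age group, and how many of them responded
sampled   = {"18-34": 400, "35-59": 400, "60+": 200}
responded = {"18-34": 160, "35-59": 280, "60+": 170}

# Observed mean of some survey variable among respondents, per group
group_mean = {"18-34": 6.1, "35-59": 6.8, "60+": 7.4}

# Unadjusted respondent mean ignores that young people responded less often
n_resp = sum(responded.values())
unadjusted = sum(responded[g] * group_mean[g] for g in responded) / n_resp

# Weight each respondent by the inverse of the estimated response
# propensity in their group (sampled / responded)
weights = {g: sampled[g] / responded[g] for g in sampled}
w_sum = sum(responded[g] * weights[g] for g in responded)
adjusted = sum(responded[g] * weights[g] * group_mean[g] for g in responded) / w_sum

print(f"unadjusted mean: {unadjusted:.3f}")  # pulled up by older respondents
print(f"adjusted mean:   {adjusted:.3f}")    # restores the sample's group shares
```

Note that the weighted total recovers the original sampled counts, so the adjustment simply restores each group's share in the drawn sample. As the surrounding text stresses, even a perfect adjustment of this kind leaves measurement error untouched.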
Such effects may have different origins, including the survey mode, question wordings, and response formats. In addition to such 'mode' and 'response effects', the 'interviewer' represents a further source of measurement error. Of importance is also the 'respondent' insofar as his/her response behavior may differ in relevant aspects. In this respect, typical examples are certainly satisficing behavior and cognitive response styles. Another source of variation is simply that respondents
arrive at their responses to survey questions through cognitive processes that may differ in relevant regards. No less important than this, however, is another factor of answering behavior which might be called 'motivated misreporting'.
A working definition of 'high-quality' surveys might thus include the idea that quality is higher the more effectively such sources of survey error are controlled for. In doing so, one would adopt the prominent 'total survey error' perspective.
1.2 Sources of Survey Error
1.2.1 Measurement Error
Measurement error may be due to several sources of variation that affect response behavior. Surveys do not yield unobtrusive measurements. Instead, the very fact that respondents are asked questions in the context of research interviews already shapes their answering behavior in some ways.
Survey-mode effects
It is well known that different survey modes produce different mean values, other things being equal. It makes a difference whether a finding has been obtained in an interviewer-assisted or self-administered survey mode. For instance, the analysis presented in chapter 10 below exemplifies the typical observation that the web mode tends to produce lower mean values than the telephone mode. 'Lower' means at the same time 'farther away' from an answer the respondent is assumed to believe to be an expected, i.e. socially desirable, answer. In the aforesaid analysis, this is the assumed expectation of presenting oneself as currently satisfied with one's life. If posed in direct communication, the mere presence of an interviewer gives rise to a kind of 'positivity bias' (Tourangeau et al. 2000, 240f.) and this bias in turn to a comparatively higher mean value than observed in the opposite case of self-administered survey modes. This is just an example of a kind of measurement effect which is usually termed 'mode effect'.
Response effects
Other measurement effects are called response effects and evolve from the way questions are worded and response formats are styled. Experimental research shows, for example, that different response distributions arise from different response formats of closed-ended survey questions, other things being equal (e.g. Engel et al. 2012, 286ff.). In the present volume, particular attention is paid to open-ended questions and possible framing effects.
Open-ended questions
Open-ended questions allow the formulation of answers in the respondent's own words. This yields more or less content and thus the need to analyze this content properly. Nowadays, content analysis certainly ranks as one of the methods of growing importance in social research. The sheer amount of content provided through websites and social media is not the only factor likely to contribute to this development; the analysis of open-ended questions in surveys is a challenging task, too. This becomes evident from the fact that verbatim answers represent more or less unstructured text material from which the survey researcher has to extract meaningful information and structure. The usual approach here is theory-driven and implies having to master the task of coding the answers properly. Accordingly, there is a strong research interest in accomplishing this task as error-free as possible. Additional insights into the structure of verbatim answers may therefore be gained by complementing this theory-driven approach to coding verbatim responses with data-driven techniques that reveal hidden structures. In the present volume, however, the challenge preceding any coding attempt is not addressed.
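As a rough illustration of what a data-driven first pass over verbatim answers might look like, the sketch below simply counts frequent content words to hint at recurring themes before any theory-driven coding. The answers and stopword list are invented for this example; the volume itself does not present this code.

```python
# Minimal, stdlib-only first look at verbatim answers: surface the most
# frequent content words as crude theme candidates (hypothetical data).
import re
from collections import Counter

answers = [
    "I worry about rising rents in my city",
    "Rents keep rising, housing is my main concern",
    "Climate change worries me most",
    "My main worry is climate change and the environment",
]

# Function words to ignore before counting (illustrative, not exhaustive)
stopwords = {"i", "my", "me", "is", "in", "and", "the", "about", "most", "keep"}

tokens = []
for text in answers:
    tokens += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stopwords]

# The top terms hint at two recurring themes: housing costs and climate
for word, count in Counter(tokens).most_common(4):
    print(word, count)
```

In practice such frequency counts would only be a starting point; the data-driven techniques alluded to in the text (e.g. clustering or topic models) go well beyond this, but the basic move, extracting structure from unstructured verbatim text, is the same.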
Chapter 3 deals with open-ended survey questions. First of all, Sturgis and Luff discuss some merits of this type of question (e.g. allowing the respondent to use his or her own frame of reference in answering a question and the potentially rich informational value of answers to open-ended questions). The authors discuss the role of interviewers as potential sources of error, because interviewers "must type the verbatim answer as the respondent articulates it, often in less than ideal conditions." This makes interviewer transcription, which is the central chapter topic, error prone. The chapter therefore discusses an alternative to letting the interviewers type in verbatim responses. This is 'audio-recording' the responses to open-ended questions (OEQs). As the authors note, "in this chapter we assess the costs and benefits of audio-recording responses to OEQs in the context of a computer-assisted personal (CAPI) survey." Based on random allocations of respondents to the conditions 'audio-recording' versus 'interviewer-typed' in the 2012 Wellcome Trust Monitor survey, the authors examine the data quality in both conditions and discuss audio-recording also with respect to the necessary consent to be audio-recorded.
Open- and closed-ended survey questions combined
From its beginnings, social research has combined different methods, and nowadays we observe a growing recognition of the idea of 'mixing' methods. Unlike the 'mixed-mode' parlance that is so popular in current survey methodology, the term 'mixed methods' usually designates efforts to combine specifically 'qualitative' and 'quantitative' methods. Applied to the narrower field of survey methodology, and there to within-survey applications, open-ended questions in otherwise standardized surveys may be regarded as a potential field of application. In this respect, the combination of closed-ended survey questions with relevant open-ended meaning probes and think-aloud probes may prove particularly promising. 'Probing' is by no means a new questioning technique, quite the contrary; it is only remarkable that its 'traditional' place is the pretesting stage of surveys. 'Probes' are simply meta-questions (in the sense of questions about questions) which can, in principle, be posed in the main survey as well. They are meta-questions pertaining to given questions, intended to clarify how respondents interpret these questions and how they arrive at their answers. We explored the feasibility of this approach elsewhere (Engel and Köster 2015, 45-47) and found it promising.
Table of contents for "Survey Measurements"
Contents
Preface ... 7
Uwe Engel
1. Introduction ... 9
Uwe Engel
2. Motivated Misreporting: Shaping Answers to Reduce Survey Burden ... 24
Roger Tourangeau, Frauke Kreuter, and Stephanie Eckman
3. Audio-recording of Open-ended Survey Questions: A Solution to the Problems of Interviewer Transcription? ... 42
Patrick Sturgis and Rebekah Luff
4. Framing Effects ... 58
Uwe Engel and Britta Köster
5. Estimating and Comparing the Quality of Different Scales of an Online Survey Using an MTMM Approach ... 76
Melanie Revilla and Willem E. Saris
6. Collecting MTMM Data on Satisfaction with Life ... 97
Laura Burmeister and Uwe Engel
7. On the Quality of Web Panels ... 112
Jelke Bethlehem
8. Online Surveys and the Burden of Mobile Responding ... 130
Marika de Bruijne and Marije Oudejans
9. Well-being, Survey Attitudes, and Readiness to Report on Everyday Life Events in an Experience Sampling Study ... 146
Laura Burmeister, Uwe Engel, and Björn Oliver Schmidt
10. Nonresponse, Measurement Error, and Estimates of Change - Lessons from the German PPSM Panel ... 160
Suat Can and Uwe Engel
11. Handling of Missing Data in Statistical Analyses ... 192
Daniel Salfrán and Martin Spiess
12. Multiple Imputation of Overdispersed Multilevel Count Data ... 209
Kristian Kleinke and Jost Reinecke
Contributors ... 227
Subject Index ... 229
Author Index ... 235
About the Author
Uwe Engel is Professor of Sociology with a focus on statistics and empirical social research at the Universität Bremen.
Bibliographic Information
- 2015, 239 pages, 28 illustrations, dimensions: 13.9 x 21.3 cm, paperback, English
- Edited by: Engel, Uwe; Contributors: Engel, Uwe; Bethlehem, Jelke; Blom, Annelies; Burmeister, Laura; Can, Suat; de Bruijne, Marika; Eckmann, Stephanie; Klausch, Thomas; Kleinke, Kristian; Köster, Britta;
- Publisher: CAMPUS VERLAG
- ISBN-10: 3593502801
- ISBN-13: 9783593502809
- Publication date: May 11, 2015
- Language: English