Summary of a review of child and youth survey methods

1. Introduction

The primary goals of the full review were:

  • To explore the methodological problems brought to light by European child and youth surveys;
  • To describe the strengths and weaknesses of the methods applied in those surveys;
  • To analyse methodological challenges linked to the objectives of those surveys;
  • To collect methodological innovations employed to overcome those difficulties.

Key findings are organised in the following categories:

  • Study design and protocols
  • Measurement issues
  • Validity/Reliability issues
  • Sample recruitment issues
  • Attrition issues
  • Respondent behaviour issues
  • Issues of statistical analysis

This briefing is designed to describe each of these issues and highlight recent work that addresses them.

2. Study designs and protocols

A key design choice is between longitudinal and cross-sectional designs. Each has different strengths and weaknesses depending on the research questions and the context. For example, one of the most common objectives of longitudinal surveys is the analysis of gross change: Lynn (2009) notes that although repeated cross-sectional surveys can be used to estimate net change, only a longitudinal survey can identify the extent of gross change. However, in some scenarios longitudinal surveys may introduce methodological biases if respondents learn how to answer the questions and thus influence the results.

The relative advantages of any design depend on the aims of a survey, but the decision also has to account for budget and resource constraints. Declining participation in surveys means that researchers must make greater efforts to convince sample units to respond, which in turn drives up costs. A temporary solution is to replace expensive designs with cheaper ones; that, however, can harm the quality of the survey results.

The question of survey protocols is especially important when a survey is conducted in several countries. Iacovou and Lynn (2013) examined the standardisation requirements of the European Union Survey of Income and Living Conditions (EU-SILC). They argued that in that situation standardising the outputs of the surveys (comparable results) rather than the inputs (uniform study conditions) would produce better results.

3. Measurement issues

Developing instruments

Meeting the requirements of a particular study often involves developing new instruments. Studies among children and young people often demand methods that differ from those aimed at adult respondents. Alt et al. (2004), for example, developed a questionnaire in which question blocks alternated with game blocks in order to sustain respondents' interest. Fuchs (2008) also applied visual elements in his child questionnaire and investigated how different question orderings and other structural variations affected the results. Brković and colleagues (2012) experimented with vignettes as an alternative measure for attitude research among children and young people.

The need for historical comparisons in Europe creates another type of methodological challenge. Koroļeva and colleagues (2014) encountered the problem that some of the measures used in previous studies – based on particular concepts and categories – became irrelevant or obsolete when Latvia gained independence from the Soviet Union. The political changes made comparisons between past and present study results difficult, if not impossible. When designing follow-up studies, new instruments had to be created, or existing ones remodelled, to enable researchers to compare information despite the political and historical differences.

A less fundamental but nevertheless important problem is the consistency of wording across different waves of a study.

Studies of the interview situation

The circumstances of the interview may affect the results. An important challenge for researchers is to detect and filter out such effects, or to find research techniques that reduce the resulting distortions. Fuchs (2008), for example, analysed the effects of parents' presence or absence during child surveys. He found that asking the parent to fill in another questionnaire in parallel to the child's interview allowed the child to answer without parental interference. As another example, Rimac et al. (2010) recorded the interviewers' accounts of the interview situations and included these accounts as variables in the analysis.

Innovative modes of data collection

Several new techniques of data collection and response recording are being utilized. The most significant changes relate to the use of computers and the Internet. The following studies illustrate recent innovations in data collection.

Gerich and Bergmair (2008) studied the usefulness of the video-CASI (computer-assisted self-interviewing) method among children in Germany. The same questionnaires were used in face-to-face interviews, in self-administered PAPI (paper-and-pencil) settings, and on the web with the video-CASI method. The study focused on maintaining children's attention throughout the interview and on the quality of the data collected.

Vogl (2012) compared the results of qualitative interviews conducted face-to-face and by telephone. This was a feasibility study of telephone interviews with children, with specific age-related implications (i.e. differing verbal, interactive and cognitive skills). The study took into account children's diverse abilities with respect to introductions, descriptions, free and assisted associations, projections and sensitive questions.

Borgers and colleagues (2003) studied another aspect of the various interview modes: the response stability of children in different data-collection situations. They found that the CATI method (computer-assisted telephone interviewing) produced the greatest response stability.

4. Validity and reliability

The study of the validity and reliability of different data collection methods, as well as of newly developed or adapted instruments, is closely related to measurement issues. In surveys of certain target groups and certain behaviours (especially risk behaviour and delinquency), correctly assessing the reliability of the answers is crucial.

Köllisch and Oberwittler (2004) examined the reliability of self-reported behaviour. They asked boys about their delinquent behaviour at home and at school and then cross-checked the answers against police records.

Trapencieris et al. (2012) studied the frequency of risk behaviours within an international longitudinal study (ESPAD). The researchers faced not only the challenge of intentionally false responses but also unintentional errors arising from the translation and interpretation of the questions. They found that some items in the questionnaire had to be adjusted and modified, and that to improve comparability and validity the items had to be validated before fieldwork began. To achieve this, the researchers organised focus group discussions with young people in the target age groups.

Borgers and Hox (2000) conducted a meta-analysis of five data sets to establish the response reliability of children. They examined the characteristics of the questions as well as the characteristics of the children. They found that younger and cognitively less sophisticated children produced less reliable responses than older children, and that girls gave more consistent responses than boys.

The validation of research instruments (i.e. questions, scales and questionnaires) is one of the fields of survey methodology. Using specific validation procedures, researchers can develop test batteries that enable them to compare results across different population samples (different age groups, or indeed different countries). Galíndez and Casas (2010) and Navarro and colleagues (2014) described validation procedures in which they translated and adapted two English-language life-satisfaction questionnaires for use with Spanish samples. Similarly, Trapencieris et al. (2012) presented the validation procedure for the ESPAD questionnaire for use in Latvia.

Dawidowsky (2004) used the qualitative method of focus group interviews to validate the results of a quantitative survey in Croatia. Reporting on different waves of the Understanding Society series, Boreham and colleagues (2012) pointed out that although skilled translators rendered the questionnaires into other languages, non-observation errors still became apparent owing to interpretation problems.

5. Sample recruitment

The studies in this group focused on the challenges of sampling among different target groups.

Villalba and colleagues (2011) tested several recruitment techniques among young regular users of cocaine and other drugs. They introduced an innovative recruitment method based on selecting and analysing the characteristics of young drug users in three Spanish cities. Building on this, they combined computer-administered and partially self-administered questionnaires for a survey of broader scope. The methodology and fieldwork yielded an in-depth analysis of this specific population.

Trapencieris et al. (2011) also studied young drug users, in Latvia. They applied the snowball method and recruited members of the user groups to participate as interviewers. Another innovation was the continuous re-adaptation of the questionnaire as experience and information accumulated during the study.

6. Attrition

Attrition is one of the basic challenges of longitudinal surveys (Lynn 2003, 2009, 2013).

The baseline samples of such surveys are typically larger than the follow-up samples (waves), especially in certain target populations or study periods. Fumagalli and colleagues (2013) conducted a meta-analysis of several surveys with respect to methods for controlling attrition.

A series of articles describing methodological innovations in the British Understanding Society panel survey focused on experimental methods of attrition control (see, for instance, Jäckle et al., 2013). Especially interesting are the problem of incentives and the influence of interviewers' efforts to keep respondents in the sample.

Millová (2012) pointed out that in Slovak youth surveys the risk of sample attrition is highest in the 16–18 age group.

Burton and colleagues (2006) assessed two well-known longitudinal surveys from the point of view of the long-term effectiveness of refusal conversion procedures, in terms of sample size, sample composition and data quality. They concluded that personal interviews were more effective than telephone contacts in converting refusals.

Attwood and Croll (2011) described a method of attrition control that allowed new participants to be recruited to replace drop-outs in one of the waves of the LSYPE survey.

Krūmiņš (2007) tried out different methods of maintaining contact with young people in Latvia in order to reduce sample attrition. The main innovation of this study was the way the sampling frame was constructed. The study aimed to interview graduates of universities and of vocational secondary education. In Latvia there is no single database containing graduates' contact information or places of residence, so for the purposes of this study different data-mining techniques were used. The most innovative approach, however, was to incorporate the Latvian equivalent of Facebook.

To enable the study of the high-school careers of socially and ethnically disadvantaged students while controlling for the disproportionate attrition among them, Kertesi and Kézdi (2008) overrepresented the target group at baseline. At the same time, they used weighting in the analysis to restore the correct proportions of the sample.
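The oversample-then-reweight logic can be illustrated with a minimal sketch. All group names and proportions below are invented for illustration and are not taken from the study; the principle is simply that a design weight equal to a group's population share divided by its sample share restores population proportions in the analysis.

```python
# Sketch: a target group forming 10% of the population is deliberately
# sampled at 40% (to guard against disproportionate attrition); design
# weights then restore the population proportions in the analysis.

population_share = {"disadvantaged": 0.10, "other": 0.90}
sample_share = {"disadvantaged": 0.40, "other": 0.60}

# Design weight = population share / sample share for each group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Toy sample of 100 units; the outcome here is an indicator of group
# membership, so the weighted mean should recover the population share.
sample = [("disadvantaged", 1.0)] * 40 + [("other", 0.0)] * 60
num = sum(weights[g] * y for g, y in sample)
den = sum(weights[g] for g, _ in sample)
weighted_mean = num / den
print(weighted_mean)  # 0.10, the true population proportion
```

Without the weights, the naive sample mean would be 0.40 — the deliberate distortion introduced at the sampling stage.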

7. Respondents' behaviour

The carefully designed study of Galesic (2006) examined how the structure and length of the questionnaire, the order of the thematic blocks, and the time needed to complete the questionnaire influence respondent behaviour. The study concluded that respondents who dropped out had often expressed lower interest and experienced a greater burden than respondents who stayed. On the other hand, Lynn (2013) studied the relationship between interview length and uncooperative respondent behaviour and found that longer interviews might not affect subsequent survey participation propensity.

For certain questions and topics, traditional techniques such as questionnaires, tests and interviews may not provide reliable information because respondents react to a presumed social expectation. This can be reflected, for instance, in a tendency to agree (choosing "yes" rather than "no") or to conform to certain values. Such tendencies may disguise respondents' real thinking and thus distort the quality of the information. In different waves of a longitudinal survey, Perra and colleagues (2012) asked students about lifetime substance abuse and compared the responses of the same students at different ages. They found that in later waves students acknowledged less substance abuse than before. The researchers suggested that this decrease in reported lifetime substance abuse was a consequence of the older students' greater awareness of social expectations – the negative judgement of drug abuse – to which they responded accordingly.

Another aspect of respondent behaviour in longitudinal surveys is that when waves follow each other relatively closely, the proportion of non-response may increase. Borgers et al. (2003) suggested that respondents might find some questions redundant because they remember them from previous waves. Howieson et al. (2008) found that spreading shorter survey waves over a longer time period halved the research intensity.

Youth surveys using web-based questionnaires are becoming more common. However, this mode is challenging in that it provides inadequate information on respondent behaviour.

8. Statistical analysis

Statistical analysis can sometimes help address the methodological problems of child and youth surveys.

An important type of methodological challenge is the correction of measurement errors using statistical procedures. Fuchs (2004) compared different measurement errors in a quantitative survey focused on the cognitive skills of children, adolescents and young adults. The author examined question-order effects, answer-order effects, scale effects, and the effects of introducing numerical values in three self-administered studies conducted with children and young people in Bavaria.

Hall and colleagues (2013) compared three health-related longitudinal surveys (the Health Survey for England, the British Social Attitudes survey and the Family Resources Survey) in order to weigh the advantages and disadvantages of efforts to retain respondents in the sample against correcting non-response bias and attrition effects through statistical weighting. The authors found that weighting could be beneficial for health variables (with the exception of smoking behaviour) and demographic variables (with the exception of housing tenure), but not for attitudinal variables.
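The kind of non-response correction examined by Hall and colleagues can be sketched with a toy calculation. All strata, response rates and outcome values below are invented for illustration: when response rates differ across strata, weighting each respondent by the inverse of their stratum's response rate removes the bias that differential non-response introduces into a simple pooled estimate.

```python
# Sketch: inverse-response-rate weighting for differential non-response.
# Stratum: (population size, response rate, mean outcome among respondents)
strata = {
    "younger": (500, 0.40, 0.30),  # harder to retain, higher outcome prevalence
    "older":   (500, 0.80, 0.10),
}

# Number of respondents per stratum.
respondents = {g: n * rr for g, (n, rr, _) in strata.items()}

# Unweighted estimate: respondents pooled as if they were representative.
unweighted = (sum(respondents[g] * strata[g][2] for g in strata)
              / sum(respondents.values()))

# Weighted estimate: each respondent carries weight 1 / response_rate.
weighted = (sum(respondents[g] * (1 / strata[g][1]) * strata[g][2] for g in strata)
            / sum(respondents[g] * (1 / strata[g][1]) for g in strata))

print(round(unweighted, 3), round(weighted, 3))
```

Here the unweighted estimate (about 0.167) understates the true population mean (0.20, since the two strata are equal-sized) because the stratum with the higher outcome prevalence responds at half the rate; the weighted estimate recovers it exactly. Whether such weighting helps in practice depends on the variable, which is precisely the comparison Hall and colleagues carried out.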


Alt, C., Schneider, S., & Steinhübl, D. (2004): The DJI Children Panel – Theory, Design and Contents' Focus Points, in: Zeitschrift für Familienforschung, 16(2): 101-110.

Attwood, G., & Croll, P. (2011). Attitudes to school and intentions for educational participation: an analysis of data from the Longitudinal Survey of Young People in England. International Journal of Research & Method in Education, 34(3), 269-287.

Boreham, R., Boldysevaite, D., & Killpack, C. (2012). UKHLS Wave 1 Technical Report. NatCen, London.

Borgers, N., & Hox, J. J. (2000, October). Reliability of responses in questionnaire research with children. Paper presented at the Fifth International Conference on Logic and Methodology, Cologne, Germany.

Borgers, N., Hox, J., & Sikkel, D. (2003). Response quality in survey research with children and adolescents: the effect of labeled response options and vague quantifiers. International Journal of Public Opinion Research, 15(1), 83-94.

Brković, I., Keresteš, G., & Kuterovac-Jagodić, G. (2012). Comparison of Cross-Sectional and Longitudinal Approach to Assessment of Self-Regulation Development in Early Adolescence. Psihologijske teme, 21(2), 273-297.

Burton, J., Laurie, H., & Lynn, P. (2006). The long‐term effectiveness of refusal conversion procedures on longitudinal surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society), 169(3), 459-478.

Dawidowsky, D. (2004). Testing validity of focus groups method by comparison with survey results. Undergraduate thesis. Filozofski fakultet, Sveučilište u Zagrebu.

Fuchs, M. (2004): Children and Young People as Respondents. Field Experiments on Interviewing Minors, in: ZUMA-Nachrichten, 26(54): 60-88.

Fuchs, M. (2008): Standardised Interviews With Children. Referring to Impact of Questions' Difficulties and Children's Cognitive Resources on Data Quality, in: K.-S. Rehberg (Ed.), The Nature of Society. Paper for the 33rd Congress of the Deutsche Gesellschaft für Soziologie, Kassel 2006, CD-ROM, Frankfurt a.M./New York: Campus, 1-17.

Fumagalli, L., Laurie, H., & Lynn, P. (2013). Experiments with methods to reduce attrition in longitudinal surveys. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(2), 499-519.

Kertesi, G., & Kézdi, G. (2008). Enrollment of Roma and non-Roma children in secondary school. The first results of the TARKI Educatio-Career Survey. In: Kolosi, T., & Tóth, I. Gy. (eds.), Social Report 2008, pp. 344-362.

Galesic, M. (2006). Dropouts on the web: Effects of interest and burden experienced during an online survey. Journal Of Official Statistics-Stockholm-, 22(2), 313.

Galíndez, E., & Casas, F. (2010). "Adaptación y validación de la Students' Life Satisfaction Scale (SLSS) con adolescentes" [Adaptation and validation of the Students' Life Satisfaction Scale (SLSS) with adolescents]. Estudios de Psicología: Studies in Psychology, 31(1): 79-87.

Gerich, J., & Bergmair, F. (2008): The Application of Video-enhanced Self-administered Computer Interviews for Social Research With Children, in: Zeitschrift für Soziologie der Erziehung und Sozialisation, 28(1): 56-74.

Hall, J., Brown, V., Nicolaas, G., & Lynn, P. (2013). Extended Field Efforts to Reduce the Risk of Non-response Bias: Have the Effects Changed over Time? Can Weighting Achieve the Same Effects? Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique, 117(1), 5-25.

Howieson, C., Croxford, L., & Howat, N. (2008). Meeting the needs for longitudinal data on youth transitions in Scotland: an options appraisal: a joint report to the Scottish Government Schools Directorate and Lifelong Learning Directorate.

Iacovou, M., & Lynn, P. (2013). Implications of the EU-SILC following rules, and their implementation, for longitudinal analysis (No. 2013-17). ISER Working Paper Series.

Jäckle, A., Lynn, P., & Burton, J. (2013). Going Online with a Face-to-Face Household Panel: Initial Results from an Experiment on the Understanding Society Innovation Panel. Understanding Society Working Paper Series, Vol. 3.

Köllisch, T., & Oberwittler, D. (2004): How honest do male youths report on their delinquency? Results of an External Validation Study, in: Kölner Zeitschrift für Soziologie und Sozialpsychologie, 47: 708-735.

Koroļeva, I., Mieriņa, I., & Rungule, R. (2014). Profesiju prestižs un izvēle jauniešu vidū: divu paaudžu salīdzinājums [Occupational Prestige and Professional Choices of Youth: Comparison of Two Cohorts]. Riga: LU Akadēmiskais apgāds.

Krūmiņš, J. (2007). Augstāko un profesionālo mācību iestāžu absolventu profesionālā darbība pēc mācību beigšanas [Professional Activities of Graduates of Higher and Vocational Education Institutions after Graduation].

Lynn, P. (2003). Developing quality standards for cross-national survey research: five approaches. Int. J. Social Research Methodology, 6(4), 323-336.

Lynn, P. (2009). Methods for longitudinal surveys. Methodology of longitudinal surveys, 1-19.

Lynn, P. (2013). Longer Interviews May Not Affect Subsequent Survey Participation Propensity. Institute for Social and Economic Research.

Millová, K. (2012). Úspešný vývin v kontexte psychológie celoživotného vývinu [Successful Development in the Context of the Psychology of Life-Long Development]. Dissertation thesis. Brno: Masarykova univerzita.

Navarro, E., Expósito, E., López, E., & Thoilliez, B. (2014). "EPIBI: Escala de Percepción de Indicadores de Bienestar" [EPIBI: Scale of Perception of Well-Being Indicators].

Perra, O., Fletcher, A., Bonell, C., Higgins, K., & McCrystal, P. (2012). School-related predictors of smoking, drinking and drug use: Evidence from the Belfast Youth Development Study. Journal of adolescence, 35(2), 315-324.

Rimac, I., Zorec, L., & Ogresta, J. (2010). Analysis of survey response rate in the European Values Study. Društvena istraživanja, 1-2(105-106), 47-67.

Trapencieris, M., Sniķere, S., & Kaupe, R. (2011) Narkotiku lietošanas tendences un paradumi Latvijā. [Trends and habits of drug use in Latvia] Rīga: Veselības ekonomikas centrs.

Trapencieris, M., Sniķere, S., Koroļeva, I., & Kārkliņa, I. (2012) ESPAD 2011: Atkarību izraisošo vielu paradumi un tendences skolēnu vidū. Rīga: Slimību un profilakses centrs. [Habits and trends of drug use among students]

Villalba, J., Suelves, J. M., Saltó, E., i Cabezas, C. (2011). "Valoración de las encuestas a adolescentes sobre consumo de tabaco, alcohol y cannabis en España" [Assessment of surveys of adolescents about smoking and the use of alcohol and cannabis in Spain]. ADICCIONES, 23(1): 11-16.

Vogl, S. (2012): Age and Methods. A Comparison of Guided Interviews With Children by Telephone and Face-to-Face, Wiesbaden: VS Verlag.
