[05/14/2024] Validity Coding

Validity and Reliability Reporting Practices in the Field of Health Education and Behavior: A Review of Seven Journals

Abstract

Health education and behavior researchers and practitioners often develop, adapt, or adopt surveys/scales to quantify and measure cognitive, behavioral, emotional, and psychosocial characteristics. To ensure the integrity of data collected from these scales, it is vital that psychometric properties (i.e., validity and reliability) be assessed. The purpose of this investigation was to (a) determine the frequency with which published articles appearing in health education and behavior journals report the psychometric properties of the scales/subscales employed and (b) outline the methods used to determine the reliability and validity of the scores produced. The results reported herein are based on a final sample of 967 published articles, spanning seven prominent health education and behavior journals between 2007 and 2010. Of the 967 articles examined, an exceedingly high percentage failed to report any validity (ranging from 40% to 93%) or reliability (ranging from 35% to 80%) statistics in their articles. For health education/behavior practitioners and researchers to maximize the utility and applicability of their findings, they must evaluate the psychometric properties of the instrument employed, a practice that is currently underrepresented in the literature. By not ensuring the instruments employed in a given study were able to produce accurate and consistent scores, researchers cannot be certain they actually measured the behaviors and/or constructs reported.

  • Intro

    • What Is Validity?

    • What Is Reliability?

    • Is Validity or Reliability More Important?

    • Why Are Validity and Reliability Necessary?

    • Current Investigation

  • Method

    • Inclusion/Exclusion Criteria

    • Sample

  • Results

  • Discussion

The purpose of this investigation was to

  • (a) determine the frequency with which published articles appearing in health education and behavior journals report the psychometric properties of the scales/subscales employed

  • and (b) outline the methods used to determine the reliability and validity of the scores produced.

Conclusion

  • Of the 967 articles examined, an exceedingly high percentage failed to report any validity (ranging from 40% to 93%) or reliability (ranging from 35% to 80%) statistics in their articles.

Current Investigation

As a result of this constellation of factors, this article seeks to accomplish the following aims:

  • (a) determine the frequency with which published articles appearing in health education and behavior journals report the psychometric properties of the scales/subscales employed

  • and (b) outline the methods used to determine the reliability and validity of the scores produced.

Coding process: In brief, each reviewer assessed

  • (a) if articles provided validity and reliability statistics,

  • (b) if the statistics were from a previous administration of the instrument or the current sample,

  • (c) how validity and reliability were assessed,

  • and (d) what statistics were provided.
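The four coding dimensions above map naturally onto a simple record per reviewed article. A minimal sketch; the class and field names are illustrative, not the authors' actual codebook:

```python
from dataclasses import dataclass, field

# Hypothetical coding record for one reviewed article; field names are
# illustrative, not the original authors' codebook.
@dataclass
class ArticleCoding:
    reports_validity: bool      # (a) any validity statistics reported?
    reports_reliability: bool   # (a) any reliability statistics reported?
    stats_source: str           # (b) "previous administration" or "current sample"
    validity_methods: list = field(default_factory=list)   # (c) e.g., ["factor analysis"]
    reliability_stats: list = field(default_factory=list)  # (d) e.g., ["alpha coefficient"]

record = ArticleCoding(
    reports_validity=True,
    reports_reliability=True,
    stats_source="current sample",
    validity_methods=["factor analysis"],
    reliability_stats=["alpha coefficient"],
)
```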

Type of Scale Being Assessed

  • Newly created scale

  • Previously developed scale

  • Adapted version of previously published scale

Type of validity reported

  • Content

    • Expert panel

    • Pilot testing

    • Literature review

    • Cognitive interviews

  • Construct

    • Factor analysis

    • Correlation coefficient

    • Chi-square

  • Face

    • Correlation coefficient

    • Expert panel

    • Factor analysis

  • Predictive

    • Correlation coefficient

    • Logistic regression

  • Criterion

    • Correlation coefficient

    • Factor analysis
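Several of the construct-validity methods listed above, notably factor analysis, can be illustrated numerically. A minimal sketch, assuming NumPy and simulated item data, of extracting first-factor loadings from an item correlation matrix (a quick first look only; a full EFA/CFA would use dedicated software):

```python
import numpy as np

def first_factor_loadings(data):
    """Rough single-factor loadings via principal-component extraction
    from the item correlation matrix (rows = respondents, cols = items)."""
    R = np.corrcoef(data, rowvar=False)          # item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    top = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # loadings on the first factor
    return np.sign(top.sum()) * top              # fix the arbitrary sign

# Simulated data: four items that are noisy copies of one latent trait,
# so all items should load strongly on a single factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = latent[:, None] + 0.3 * rng.normal(size=(200, 4))
loadings = first_factor_loadings(items)
```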

Type of reliability reported

  • Internal consistency

    • Alpha coefficient

    • Correlation coefficient

    • Kappa coefficient

  • Test-retest

    • Correlation coefficient

    • Alpha coefficient

    • Kappa coefficient

  • Interobserver

    • Kappa coefficient

    • Correlation coefficient

  • Parallel-forms reliability

    • Alpha coefficient

    • Correlation coefficient
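The coefficients in this list are straightforward to compute. A minimal pure-Python sketch (standard library only, toy data) of Cronbach's alpha for internal consistency, a Pearson correlation for test-retest reliability, and Cohen's kappa for interobserver agreement:

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: list of k columns, each holding one item's scores across n respondents
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(pvariance(c) for c in items) / pvariance(totals))

def pearson_r(x, y):
    # test-retest reliability: correlate time-1 and time-2 scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

def cohens_kappa(r1, r2):
    # interobserver agreement between two raters, corrected for chance agreement
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in set(r1) | set(r2))
    return (p_obs - p_exp) / (1 - p_exp)
```

Perfectly correlated items yield alpha = 1; kappa discounts the agreement two raters would reach by chance alone.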

Construct Validation in Social and Personality Research: Current Practice and Recommendations

Abstract

The verity of results about a psychological construct hinges on the validity of its measurement, making construct validation a fundamental methodology to the scientific process. We reviewed a representative sample of articles published in the Journal of Personality and Social Psychology for construct validity evidence. We report that latent variable measurement, in which responses to items are used to represent a construct, is pervasive in social and personality research. However, the field does not appear to be engaged in best practices for ongoing construct validation. We found that validity evidence of existing and author-developed scales was lacking, with coefficient α often being the only psychometric evidence reported. We provide a discussion of why the construct validation framework is important for social and personality researchers and recommendations for improving practice.

Research Questions

What types of measures are social and personality researchers using?

How often do authors report a previous validation study?

How often do authors report psychometric information?

  • Intro

    • Purpose of Study

    • Construct Validation

  • Method

    • Sampling and Data Sources

    • Coding of Articles

  • Results

    • Types of Measures Used

    • Validity Evidence Reported

  • Discussion

    • On the Fly Measurement

    • The Importance of Ongoing Validation

    • Big Theories, Small Scales

    • Limitations of α

  • Conclusions and Recommendations

Purpose of study

  • Thus, we set out to determine to what extent researchers are utilizing rigorous methodology for construct validation.

  • Prior to reporting results from our review, we briefly review the established standards for generating validity evidence of measures, reiterating the fundamental role of construct validity in strengthening the conclusions drawn from psychological research.

Specifically, we aimed to answer the following questions:

  • 1. What types of measures are social and personality researchers using?

  • 2. How often do authors report a previous validation study?

  • 3. How often do authors report psychometric information?

Coding

  • We focus on how researchers engaged in ongoing construct validation, specifically the validity evidence from the structural phase reported in the “Method” section.

  • Common approaches to this phase of construct validation are listed in Table 1.

  • We focused on the Method section because that is where the primary variables of interest and their psychometric properties (e.g., factor analysis or reliability) are typically described.

  • Accordingly, our results exclude substantive or external construct validity evidence (e.g., theoretical breadth or predictive validity) possibly present in other sections.

  • Additionally, we did not code for validity evidence of manipulation checks or measures that were not used in the final analysis.

  • We coded the frequency of reported evidence for each measure, recording objective observations (e.g., number of items on a scale or presence of a reliability coefficient) rather than subjective judgments.

Results

  • Types of Measures Used

  • Validity Evidence Reported

    • Use of existing scales.

    • Psychometric information.

    • Reliability coefficients.

Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them

Abstract

In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.

  • Intro

  • Questioning the Validity of Psychological Science

  • Questionable Measurement Practices

  • QMPs Threaten All Aspects of Validity

  • Using Questions That Promote Validity of Measure Use

    • How to use these questions

    • 1 - What is your construct?

    • 2 - Why and how did you select your measure?

    • 3 - What measure did you use to operationalize the construct?

    • 4 - How did you quantify your measure?

    • 5 - Did you modify the scale? And if so, how and why?

    • 6 - Did you create a measure on the fly?

  • Summary
