Understanding the Validity of Measures for an SEM Model

After screening your data and assessing whether your measures are reliable, you need to examine the validity of your constructs and indicators. There are numerous validity tests a researcher needs to be aware of to support the legitimacy of their findings. Before moving on, I want to introduce what validity means, and specifically the idea of construct validity: the assessment of a construct and its measures, which includes content, predictive, convergent, and discriminant validity. I will discuss how to assess these validities in more detail in a confirmatory factor analysis (Chapter 4), but here is a quick overview of the topics.

Content Validity—also known as face validity, this validity assesses whether the indicators represent the construct of interest. Are you capturing the unobserved variable? With a sufficient number of indicators, a researcher can make this argument. If it is a new construct and the researcher includes only a small number of indicators (say, two), the study is open to criticism that content validity may not have been achieved. Content validity is often an “eyeball” test: simply assessing whether the indicators even attempt to measure the unobserved construct on face value. This type of validity is focused on appearance. Do the indicators asked in a survey appear to measure the specified construct? One could argue that this is a superficial assessment, but it is only one of many steps in determining the validity of a construct.

Convergent Validity—this type of validity determines whether the indicators for a construct are all measuring the “same” thing. Do all indicators “converge” in measuring this construct? A lack of convergent validity indicates that your indicators are weakly measuring your construct, or that your indicators are actually a better measure for a separate and perhaps similar construct.

Discriminant Validity—this involves showing that a set of indicators presumed to measure a construct is differentiated from other constructs. This can be problematic with constructs that exhibit multicollinearity, or a high correlation between constructs. For instance, if you were trying to measure the constructs of Speed and Efficiency and the final analysis showed that these two constructs had a correlation of .90, one could argue that you are not actually measuring two different things (there are very few distinguishing characteristics between the measurements). In essence, discriminant validity assesses whether your construct is distinct and different from other potential constructs of interest.
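To make the correlation check concrete, here is a minimal sketch in Python. The indicator columns (speed1–speed3, eff1–eff3) and the simulated responses are invented for illustration, not taken from any real dataset; the sketch simply builds composite scores for the two constructs and checks their correlation against the kind of .90 value discussed above.

```python
# Minimal sketch of a discriminant validity check on composite scores.
# Indicator names speed1..speed3 and eff1..eff3 are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 200

# Simulated continuous indicator scores for two constructs
# (deliberately built from a shared component so they overlap heavily).
base = rng.normal(size=n)
data = pd.DataFrame({
    "speed1": base + rng.normal(scale=0.5, size=n),
    "speed2": base + rng.normal(scale=0.5, size=n),
    "speed3": base + rng.normal(scale=0.5, size=n),
    "eff1":   base + rng.normal(scale=0.5, size=n),
    "eff2":   base + rng.normal(scale=0.5, size=n),
    "eff3":   base + rng.normal(scale=0.5, size=n),
})

# Composite (mean) scores for each construct.
speed = data[["speed1", "speed2", "speed3"]].mean(axis=1)
efficiency = data[["eff1", "eff2", "eff3"]].mean(axis=1)

r = speed.corr(efficiency)
print(f"Speed-Efficiency correlation: {r:.2f}")

# A correlation near .90 (as in the example above) suggests the two
# constructs may not be empirically distinct.
if r >= 0.90:
    print("Potential discriminant validity concern.")
```

In a full CFA, discriminant validity would typically be assessed on the latent constructs themselves (for example, comparing construct correlations against the square root of the AVE) rather than on simple composites; the sketch is only meant to illustrate the logic of the correlation check.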

Predictive Validity—does the construct actually predict what it is supposed to predict?

Highlights of the Chapter:

  1. Data screening is important before analyzing your data. Problematic data will lead to problematic (or even inaccurate) results.
  2. Data screening concerns are: respondent abandonment, respondent misconduct, and impermissible values.
  3. Missing data usually takes three forms: missing completely at random, missing at random, and missing not at random.
  4. Addressing missing data can be accomplished by deletion or imputation.
  5. Imputation is often performed with series mean, linear interpolation, and regression methods (see the first sketch after this list).
  6. The reliability, or the consistency of responses to indicators, needs to be assessed. The most popular way is to calculate Cronbach’s alpha. A Cronbach’s alpha value greater than .70 denotes an acceptable amount of reliability (a calculation sketch follows this list).
  7. Identification in an SEM model is the difference between the number of unique (non-redundant) elements in the covariance matrix and the number of parameters to be estimated. Identification is often discussed in terms of “degrees of freedom” (see the sketch after this list).
  8. Under-identified models have degrees of freedom less than zero. Just-identified models have degrees of freedom equal to zero. Over-identified models have degrees of freedom greater than zero.
  9. SEM requires a larger sample size than other statistical techniques. The sample size needed depends on the complexity of the model and the desired level of power.
  10. Understanding the validity of your measures is important. Four important validity concerns are content validity, convergent validity, discriminant validity, and predictive validity.
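For highlight 5, here is a minimal sketch of the two simpler imputation strategies, using an invented survey column (item1) with missing responses. Regression imputation, which predicts missing values from other observed variables, is omitted for brevity.

```python
# Minimal sketch of simple imputation strategies (hypothetical survey column).
import numpy as np
import pandas as pd

s = pd.Series([4.0, np.nan, 3.0, 5.0, np.nan, 2.0], name="item1")

series_mean = s.fillna(s.mean())   # series-mean imputation
interpolated = s.interpolate()     # linear interpolation between neighboring values

print(pd.DataFrame({"original": s,
                    "mean_imputed": series_mean,
                    "interpolated": interpolated}))
```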
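For highlight 6, here is a minimal sketch of the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), applied to an invented matrix of Likert-type responses.

```python
# Minimal sketch of Cronbach's alpha (hypothetical item matrix).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha: {alpha:.2f}")  # values above .70 are typically deemed acceptable
```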
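For highlights 7 and 8, the degrees of freedom equal the number of unique elements in the covariance matrix, p(p + 1)/2 for p observed variables, minus the number of freely estimated parameters. The counts in the sketch below are hypothetical.

```python
# Minimal sketch of model identification via degrees of freedom.

def model_degrees_of_freedom(n_observed: int, n_free_parameters: int) -> int:
    """df = unique covariance-matrix elements minus freely estimated parameters."""
    n_unique_moments = n_observed * (n_observed + 1) // 2
    return n_unique_moments - n_free_parameters

# Hypothetical model: 6 observed indicators, 13 freely estimated parameters.
df = model_degrees_of_freedom(n_observed=6, n_free_parameters=13)

if df < 0:
    status = "under-identified"
elif df == 0:
    status = "just-identified"
else:
    status = "over-identified"

print(f"Degrees of freedom: {df} ({status})")  # 21 - 13 = 8 -> over-identified
```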

Source: Thakkar, J.J. (2020). “Procedural Steps in Structural Equation Modelling”. In: Structural Equation Modelling. Studies in Systems, Decision and Control, vol 285. Springer, Singapore.
