Problems with the RMSEA fit index
Arndt Regorz, Dipl. Kfm. & M.Sc. Psychologie, 08/14/2021
If you test a model with an SEM program - whether a path model, a confirmatory factor analysis (CFA), or a full structural equation model (SEM) - you must check the model fit before interpreting the results, i.e., whether the model fits the data in a meaningful way at all.
In addition to the chi-square test of (exact) model fit, fit indices are predominantly used for this purpose; they do not rest on the (unrealistic) requirement of a perfect fit.
Based, among others, on the seminal paper by Hu and Bentler (1999; cited more than 80,000 times so far according to Google Scholar), the following fit indices are frequently used in the literature: the CFI (sometimes also the TLI), the SRMR, and the RMSEA. Hu and Bentler (1999) considered CFI values above .95 acceptable, SRMR values below .08, and RMSEA values below .06 (as single fit indices; for combinations of fit indices, Hu and Bentler gave other cut-off values).
However, other cut-off values are also cited in parts of the literature; for further sources, see Kenny (2020).
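As a minimal illustration of these single-index criteria, the following Python sketch compares a set of hypothetical fit index values against the Hu and Bentler (1999) cut-offs; the index values are assumptions made up for the example, not output from any real model:

```python
def check_hu_bentler(cfi, srmr, rmsea):
    """Compare fit indices to the Hu & Bentler (1999) single-index cut-offs."""
    return {
        "CFI > .95": cfi > 0.95,
        "SRMR < .08": srmr < 0.08,
        "RMSEA < .06": rmsea < 0.06,
    }

# Hypothetical values for a small path model
print(check_hu_bentler(cfi=0.98, srmr=0.04, rmsea=0.11))
# {'CFI > .95': True, 'SRMR < .08': True, 'RMSEA < .06': False}
```

A pattern like this - good CFI and SRMR, but a seemingly poor RMSEA - is exactly the constellation discussed below.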
Problem case: RMSEA
Unfortunately, it has since been shown that the RMSEA has serious problems with simpler models that have few degrees of freedom. This is especially true for simple path models and simple CFAs, which often have relatively few degrees of freedom. In such cases, the RMSEA can wrongly indicate a poor fit even when the model actually fits the data well (Kenny et al., 2015).
To understand the reason for this problem, one must look a bit at how the RMSEA is constructed. The RMSEA is an absolute fit index that incorporates model complexity (Hu & Bentler, 1999). To account for complexity, it includes what amounts to a penalty for few degrees of freedom. As a result, models with few degrees of freedom often show a poor RMSEA even when they fit the data quite well, as the formula below makes explicit.
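Concretely, the RMSEA is commonly computed from the model chi-square, the degrees of freedom df, and the sample size N (some programs use N instead of N - 1 in the denominator):

\[
\text{RMSEA} = \sqrt{\frac{\max(\chi^2 - df,\; 0)}{df\,(N - 1)}}
\]

Because df appears in the denominator, the same excess of chi-square over its degrees of freedom produces a much larger RMSEA when df is small than when df is large.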
In simulation studies, Kenny et al. (2015) found that for models with few degrees of freedom, even models with a non-significant chi-square test (i.e., no significant discrepancy between model and data at all) can still show a poor RMSEA. Their conclusion: "Using the RMSEA to assess the model fit in models with small df is problematic and potentially misleading unless the sample size is very large. We urge researchers, reviewers, and editors not to dismiss models with large RMSEA values with small df without examining other information. In fact, we think that it [sic] advisable for researchers to completely avoid computing the RMSEA when model df are small. In such cases, poor fit can be diagnosed by specifying additional models that include deleted parameters and determining if those additional parameters are needed in the model." (Kenny et al., 2015, p. 503).
What constitutes "few degrees of freedom" in this context also depends considerably on the sample size. In Figure 2 of their article, Kenny et al. (2015, p. 497) show, for various sample sizes and degrees of freedom, what percentage of their correctly specified models was (incorrectly) rejected based on the acceptance criterion RMSEA <= .10.
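A short numerical illustration of this point, with made-up but plausible numbers and the N - 1 variant of the formula (N = 200 in both cases):

\[
df = 1,\ \chi^2 = 3.5\ (p \approx .06): \quad \text{RMSEA} = \sqrt{\tfrac{3.5 - 1}{1 \cdot 199}} \approx .112
\]
\[
df = 50,\ \chi^2 = 60\ (p \approx .16): \quad \text{RMSEA} = \sqrt{\tfrac{60 - 50}{50 \cdot 199}} \approx .032
\]

In both cases the chi-square test is non-significant, yet with df = 1 the RMSEA exceeds even the lenient .10 criterion, while with df = 50 it looks excellent.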
Possible solutions
If this situation arises with one of your path models or CFAs with few degrees of freedom, you have two main options for dealing with it:
On the one hand, following the recommendation by Kenny et al. (2015) quoted above, you could refrain from reporting the RMSEA and demonstrate model fit primarily with the CFI (and/or TLI) and the SRMR, carrying out the further steps recommended there if necessary (adding further paths and testing whether they are needed). However, reporting the RMSEA has become almost standard, and many thesis advisors or reviewers will simply expect it.
On the other hand, you could report the RMSEA together with the CFI and SRMR, but argue with reference to Kenny et al. (2015) that, given the small number of degrees of freedom in your model, the RMSEA is not meaningful and that you therefore based the decision about accepting the model on the CFI and SRMR, if necessary after first testing additional paths (see the sketch below).
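One common way to carry out this "testing additional paths" step is a chi-square difference (likelihood-ratio) test between the original model and a model that additionally estimates the previously deleted parameters. A minimal Python sketch, with hypothetical chi-square values and degrees of freedom that would come from the output of whatever SEM program you use:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted, chisq_full, df_full):
    """Chi-square difference test for two nested models.

    The restricted model is the original model; the full model additionally
    estimates the previously deleted parameters (paths). A significant result
    suggests the added parameters are needed, i.e. the original model omits
    something important.
    """
    delta_chisq = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# Hypothetical example: original model chi2 = 3.5 with df = 2,
# model with one additional path chi2 = 3.1 with df = 1
print(chi_square_difference(3.5, 2, 3.1, 1))
# (0.4, 1, ~0.53) -> the additional path is not needed
```

If the additional parameters turn out not to be needed, that supports accepting the original model despite its unflattering RMSEA.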
References
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118
Kenny, D. A. (2020, June 5). Measuring Model Fit. http://www.davidakenny.net/cm/fit.htm
Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486-507. https://doi.org/10.1177/0049124114543236