While causality cannot be determined simply by fitting a path model, researchers often use these models as a representation of an underlying causal process model. Indeed, causal implications are a vital characteristic of the model’s explanatory value, but these implications are rarely examined directly. When models are hypothesized to be causal, they can be differentiated from one another by examining their causal implications as defined by a combination of model assumptions, data, and estimation procedure. However, the implied causal relationships may not be immediately obvious to researchers, both because of norms against making causal inferences on the basis of non-experimental data in psychology, and because intricate or long-chain causal structures (as in longitudinal panel model designs) can make the necessary mental arithmetic difficult. We present Matrices of Implied Causation (MICs), a tool for easily understanding and reporting a model’s implications for the causal influence of one variable on another, and illustrate their use in model checking and design with examples from the literature. We argue that MICs should become a routine element of interpretation when examining models with complex causal implications, and that they may provide an additional tool for differentiating among models with otherwise similar fit.
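To make the idea concrete, here is a minimal sketch of how a MIC-style summary could be computed for a simple linear recursive path model. The variable names, coefficients, and the `(I - B)^{-1} - I` total-effect formula are illustrative assumptions for the linear acyclic case, not necessarily the paper's exact definition, which depends on the model assumptions, data, and estimation procedure.

```python
import numpy as np

# Hypothetical 3-variable recursive path model: X -> M -> Y, plus X -> Y.
# B[i, j] holds the direct effect of variable j on variable i.
labels = ["X", "M", "Y"]
B = np.array([
    [0.0, 0.0, 0.0],   # X: exogenous
    [0.5, 0.0, 0.0],   # M <- X (direct effect 0.5)
    [0.2, 0.4, 0.0],   # Y <- X (0.2) and Y <- M (0.4)
])

# In a linear acyclic model, the implied total effect of each variable on
# each other variable sums all directed paths:
#   B + B^2 + B^3 + ... = (I - B)^{-1} - I
I = np.eye(len(labels))
mic = np.linalg.inv(I - B) - I

# mic[i, j] is the implied total causal effect of variable j on variable i.
# E.g., total effect of X on Y: direct 0.2 + indirect 0.5 * 0.4 = 0.4
print(np.round(mic, 3))
```

Reading the resulting matrix row by row makes long-chain implications (as in longitudinal panel designs) explicit without mental arithmetic: each cell is the accumulated effect along every directed path from the column variable to the row variable.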