This is a personal note for my study. There may be some errors; if you notice any, I'd be happy if you let me know.
The notation here follows Causal Inference: What if.
Here is a link to this book.
16.1 The three instrumental conditions
If Z meets the following three conditions, we say that Z is an instrumental variable (IV).
- Z is associated with A (relevance condition)
- Z does not affect Y except through its potential effect on A (exclusion restriction)
- Z and Y do not share causes (no confounding for the effect of Z on Y)
Condition (2) can be written as: $Y^{a,z} = Y^a$ for all individuals and all values $a$ and $z$ (once the treatment is fixed, setting the instrument has no effect on the outcome).
Precisely, condition (3) can be expressed as marginal exchangeability: $Y^{a,z} \perp\!\!\!\perp Z$ for all $a$ and $z$.
Empirically, condition (1) can be verified, while conditions (2) and (3) cannot.
The three conditions are not sufficient to identify the causal effect in the population. One more condition is required. It will be described in 16.3.
Examples of instrumental variables
- A: smoking, Y: weight, Z: the price of cigarettes
- A: alcohol intake, Y: a health outcome (e.g., disease risk), Z: a genetic factor associated with alcohol metabolism
- A: COX-2 selective versus non-selective non-steroidal anti-inflammatory drugs, Y: gastrointestinal bleeding, U_Z: the physician's preference for treatments, Z: the last prescription issued by the physician before the current prescription
- Z: access to treatment, for example, physical distance or travel time to a facility
16.2 The usual IV estimand
When the three conditions above and the additional condition hold, the causal effect is identified as

$$E[Y^{a=1}] - E[Y^{a=0}] = \frac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[A \mid Z=1] - E[A \mid Z=0]},$$

which is called the usual IV estimand.
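As a concrete illustration, here is a minimal sketch of the plug-in estimate of the usual IV estimand for dichotomous Z and A. The data are simulated (the note gives no dataset), with an unmeasured confounder U so that the crude A-Y association would be biased:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated data: U is an unmeasured confounder of A and Y;
# Z is randomized (shares no cause with Y) and affects A.
U = rng.uniform(size=n)
Z = rng.binomial(1, 0.5, size=n)
A = rng.binomial(1, 0.1 + 0.5 * Z + 0.3 * U)
Y = 2.0 * A + 3.0 * U + rng.normal(size=n)  # true effect of A on Y is 2

# Usual IV (Wald) estimate: ratio of the Z-Y and Z-A mean differences.
num = Y[Z == 1].mean() - Y[Z == 0].mean()
den = A[Z == 1].mean() - A[Z == 0].mean()
iv_estimate = num / den
print(iv_estimate)  # close to the true effect, 2
```

Note that the four sample means are exactly the four expectations in the estimand, so no model fitting is needed when Z is dichotomous.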
How to compute the IV estimand?
You can estimate each of the four expectations above. That means fitting two saturated linear models,

$$E[Y \mid Z] = \alpha_0 + \alpha_1 Z \quad \text{and} \quad E[A \mid Z] = \beta_0 + \beta_1 Z,$$

and taking the ratio $\hat{\alpha}_1 / \hat{\beta}_1$.
There is another way to estimate the usual IV estimand: two-stage least squares.
- Fit the first-stage treatment model $E[A \mid Z] = \alpha_0 + \alpha_1 Z$ and compute each individual's predicted value $\hat{E}[A \mid Z]$.
- Fit the second-stage outcome model $E[Y \mid Z] = \beta_0 + \beta_1 \hat{E}[A \mid Z]$, i.e., regress the outcome on the predicted treatment rather than the observed treatment.

The parameter estimate $\hat{\beta}_1$ will always be numerically equivalent to the standard IV estimate.
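The numerical equivalence can be checked directly. Below is a sketch of manual two-stage least squares with NumPy on simulated data (variable names and the data-generating process are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
U = rng.uniform(size=n)
Z = rng.binomial(1, 0.5, size=n)
A = rng.binomial(1, 0.1 + 0.5 * Z + 0.3 * U)
Y = 2.0 * A + 3.0 * U + rng.normal(size=n)

# Usual IV (Wald) estimate
wald = (Y[Z == 1].mean() - Y[Z == 0].mean()) / (A[Z == 1].mean() - A[Z == 0].mean())

# First stage: regress A on Z, keep the predicted values A_hat
X1 = np.column_stack([np.ones(n), Z])
alpha = np.linalg.lstsq(X1, A, rcond=None)[0]
A_hat = X1 @ alpha

# Second stage: regress Y on A_hat; the slope is the 2SLS estimate
X2 = np.column_stack([np.ones(n), A_hat])
beta = np.linalg.lstsq(X2, Y, rcond=None)[0]

print(beta[1], wald)  # numerically identical with one dichotomous instrument
```

With a single dichotomous instrument the second-stage slope reduces algebraically to cov(Z, Y)/cov(Z, A), which is exactly the Wald ratio, so the two numbers agree up to floating-point error.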
However, the confidence interval is usually very wide when A and Z are weakly associated. A commonly used rule of thumb is to declare an instrument weak if the F-statistic from the first-stage model is less than 10.
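The rule of thumb can be checked from the first-stage fit. A sketch for a single instrument, using the identity $F = (n-2)\,R^2/(1-R^2)$ for a simple linear regression (the simulated strong/weak instruments are illustrative):

```python
import numpy as np

def first_stage_f(Z, A):
    """F-statistic for the first-stage regression of A on a single instrument Z."""
    n = len(Z)
    r2 = np.corrcoef(Z, A)[0, 1] ** 2  # R^2 of a simple linear regression
    return (n - 2) * r2 / (1 - r2)

rng = np.random.default_rng(2)
n = 10_000
Z = rng.binomial(1, 0.5, size=n)
A_strong = rng.binomial(1, 0.25 + 0.5 * Z)    # strong instrument
A_weak = rng.binomial(1, 0.5 + 0.005 * Z)     # weak instrument

print(first_stage_f(Z, A_strong))  # far above 10
print(first_stage_f(Z, A_weak))
```

By the rule of thumb, `A_strong` would be accepted as an instrument while `A_weak` would usually be flagged as weak.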
Structural mean models
The two-stage-least-squares approach requires investigators to make strong parametric assumptions. Therefore, structural mean models, estimated via g-estimation, can sometimes be used instead.
16.3 A fourth identifying condition: homogeneity
There are four homogeneity conditions.
- constant effect of treatment A on outcome Y across individuals (A is dichotomous)
- equality of the average causal effect of A on Y across levels of Z, both in the treated and in the untreated (Z and A are dichotomous)
- U is not an additive effect modifier (A is dichotomous)
- the Z-A association on the additive scale is constant across levels of the confounders U (no restriction on A)
The fourth condition has a testable implication: for a dichotomous A, if some of the confounders are measured, the Z-A association on the additive scale must be the same across levels of the measured confounders.
For a continuous A, if we are willing to make additional assumptions about linearity, the variance of the treatment A must be constant across levels of the instrument Z.
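The dichotomous-A implication can be checked by stratifying on a measured confounder. A sketch with simulated data and a hypothetical measured confounder L (the names and probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
Z = rng.binomial(1, 0.5, size=n)
L = rng.binomial(1, 0.4, size=n)              # measured confounder
A = rng.binomial(1, 0.2 + 0.4 * Z + 0.2 * L)  # additive Z-A effect, same in both L strata

# Z-A risk difference within each level of L
def za_diff(mask):
    return A[mask & (Z == 1)].mean() - A[mask & (Z == 0)].mean()

diff_L0 = za_diff(L == 0)
diff_L1 = za_diff(L == 1)
print(diff_L0, diff_L1)  # approximately equal, consistent with the implication
```

If the two stratum-specific differences were clearly unequal, the fourth homogeneity condition would be rejected for these data.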
Homogeneity seems implausible to some people.
16.4 An alternative fourth condition: monotonicity
We investigate an alternative assumption to homogeneity. Define the counterfactual treatments $A^{z=1}$ and $A^{z=0}$: the treatment an individual would receive if the instrument were set to $z=1$ or $z=0$, respectively.

Monotonicity states that $A^{z=1} \geq A^{z=0}$ for all individuals, i.e., no individual would take the treatment only when the instrument is off (no defiers).
Assuming this monotonicity property, the usual IV estimand equals

$$E[Y^{a=1} - Y^{a=0} \mid A^{z=1} = 1, A^{z=0} = 0],$$

which is the average causal effect in the compliers: the individuals whose treatment always agrees with the instrument ($A^{z=1} = 1$ and $A^{z=0} = 0$, so $A = Z$ whichever value $Z$ takes).
Monotonicity seems plausible, but there are some issues discussed below.
The proportion of compliers may be small; if so, knowing the causal effect in the compliers offers little guidance for deciding whether to assign the treatment to the overall population.
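Under monotonicity the proportion of compliers is itself identified as $E[A \mid Z=1] - E[A \mid Z=0]$, so it can be estimated from the data. A sketch with simulated compliance types (the type probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Compliance types under monotonicity (no defiers):
# 0 = never-taker, 1 = complier, 2 = always-taker
ptype = rng.choice([0, 1, 2], size=n, p=[0.5, 0.3, 0.2])
Z = rng.binomial(1, 0.5, size=n)

# Counterfactual treatments A^{z=0} and A^{z=1}
A_z0 = (ptype == 2).astype(int)                    # only always-takers treated when Z=0
A_z1 = ((ptype == 1) | (ptype == 2)).astype(int)   # compliers and always-takers when Z=1
A = np.where(Z == 1, A_z1, A_z0)

complier_share = A[Z == 1].mean() - A[Z == 0].mean()
print(complier_share)  # close to the true complier proportion, 0.3
```

A small estimated complier share is a warning sign that the complier average causal effect describes only a sliver of the population.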
There are likely to be defiers ($A^{z=1} = 0$ and $A^{z=0} = 1$).
See Causal Inference: What if.
16.5 The three instrumental conditions revisited
What if the conditions fail to hold?
Condition (1): What if the Z-A association is weak?
- a wide 95% confidence interval
- amplified bias from violations of conditions (2) and (3)
- finite-sample bias caused by the weak instrument itself, even if the three conditions hold
Condition (2): What if the absence of a direct effect of the instrument on the outcome does not hold?
If there is a direct effect of the instrument on the outcome, that effect enters the numerator of the usual IV estimand and, after division by the denominator, is incorrectly attributed to the effect of treatment A.
Condition (2) can also be violated indirectly. For example, suppose a continuous treatment A is replaced in the analysis by a coarser version A*. The true treatment A remains a pathway through which Z affects Y, so with respect to the analyzed treatment A* there is a direct effect Z → Y.
Condition (3): What if there is confounding for the effect of the instrument on the outcome?
The numerator is inflated in the same way as under violations of condition (2).
16.6 Instrumental variable estimation versus other methods
- Unlike other methods, IV estimation requires modeling assumptions even if infinite data were available.
- Relatively minor violations of the conditions for IV estimation might result in large biases.
- The setting in which IV estimation applies is more restrictive: e.g., a truly dichotomous and time-fixed treatment A, a strong and causal proposed instrument Z, and homogeneity or monotonicity.