Many times I have seen seminar tables where researchers list all the robustness checks they’ve done: adding controls, adding fixed effects, mixing them, and interacting them. The coefficient of interest hopefully remains about the same. But then, again, someone will ask for an additional control to be added.
What too often people don’t realize is this: what they are asking for is already controlled for by some fixed effects! And the presenter doesn’t recognize it either, and says “Ah, I guess I’ll have to add that too to the galore of columns I have (and I’ll have to shrink the font a bit more to make room…)”. Or they have actually already run the regression and jump to the table that shows that the coefficient of interest doesn’t change by a single decimal, and neither does the t-stat/p-value.
If that is the case, they have probably added a control variable that is collinear with some fixed effects (FEs) already included in the regression. Because people do not pay attention to Stata’s innocuous comment “note: 2015.year omitted because of collinearity”, they don’t notice, and gladly add the new results to the new table.
Let’s do a silly example to illustrate my point. Say that we want to test whether people who grow up with siblings make more money when they are 45. We have a dataset recording where each person lived growing up. No one moves anywhere, ever. We also have a dummy variable for whether they grew up with one or more siblings, and their annual income at age 45.
We believe that the relationship is linear with some additive noise (and let’s roll with that being the true model too). And the place where they grew up has an additive effect on future income. The model is then

$$y_i = \beta s_i + \gamma_{c(i)} + \varepsilon_i, \tag{1}$$

where $y_i$ is person $i$’s income at 45, $s_i$ is the sibling dummy, $c(i)$ is the place where person $i$ grew up, $\beta$ is the coefficient we’re interested in (the effect of growing up with siblings), $\gamma_c$ is the effect of growing up in place $c$, and $\varepsilon_i$ is the noise/error term.
I run this, make some beautiful slides, and start presenting. Then someone in the audience asks “But we also know the average income in each of these places where they grew up; what if that influences future outcomes of the subjects too, which biases your results?”. Denote this new variable by $\bar w_{c(i)}$ (i.e., the average income in $c(i)$ while $i$ grew up there), the marginal effect of it by $\delta$, and then the hypothesized new true model is

$$y_i = \beta s_i + \delta\,\bar w_{c(i)} + \gamma_{c(i)} + \varepsilon_i. \tag{2}$$
This might be the true model. It makes sense and looks like it could do something interesting. But by adding $\bar w_{c(i)}$, we will not have made any change whatsoever to our estimator of $\beta$ when running a regression. That’s because we have added a control variable that is collinear with the fixed effects. And that makes the model not identified.
If you run this in any statistical program or language, it won’t be able to invert the matrix when performing OLS. In that case, e.g., Stata will remove one FE to make it invertible. So why do I say the model isn’t identified? A linear system has a unique solution only if the corresponding matrix is invertible. If it isn’t, there are infinitely many parameter vectors that solve the system, and each of them explains the data exactly as well as the true parameters we seek.
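To make this concrete before the algebra, here is a minimal simulation sketch. All names and numbers below are made up for illustration, and NumPy’s `lstsq` stands in for the regression routine: adding a control that is constant within each city leaves the estimate of the sibling coefficient untouched.

```python
# A made-up two-city example: regress income on a sibling dummy plus city
# dummies (model (1)), then add a city-average-income control that is
# constant within each city (model (2)). The control is collinear with the
# city dummies, so the estimated sibling coefficient does not move.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
city = rng.integers(0, 2, n)                 # which city person i grew up in
sib = rng.integers(0, 2, n)                  # sibling dummy
w_bar = np.where(city == 0, 30.0, 50.0)      # assumed city-average incomes
gamma = np.where(city == 0, 2.0, 5.0)        # assumed city fixed effects
beta, delta = 1.5, 0.1                       # assumed true parameters
y = beta * sib + delta * w_bar + gamma + rng.normal(0, 1, n)

# Model (1): sibling dummy and one dummy per city (no intercept)
X1 = np.column_stack([sib, city == 0, city == 1]).astype(float)
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Model (2): same regressors plus the collinear control; lstsq still returns
# a (minimum-norm) solution even though X2 is rank deficient
X2 = np.column_stack([X1, w_bar])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

print(b1[0], b2[0])  # the two sibling-coefficient estimates coincide
```

Any least-squares routine that handles rank deficiency (or a package like Stata that drops a collinear column) will show the same pattern; only the coefficients on the collinear columns are arbitrary.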
I now want to illustrate this point, that the new model (2) has no unique solution:
Assume that we are only considering two cities ($c \in \{1, 2\}$). Take an estimate $\hat\delta = \delta + a$ (a value that is off the true value by some amount $a$). Then, adding and subtracting $a\,\bar w_{c(i)}$ in model (2),

$$y_i = \beta s_i + (\delta + a)\,\bar w_{c(i)} + \gamma_{c(i)} - a\,\bar w_{c(i)} + \varepsilon_i = \beta s_i + \hat\delta\,\bar w_{c(i)} + \big(\gamma_{c(i)} - a\,\bar w_{c(i)}\big) + \varepsilon_i.$$
If I now make some clever choices for $\hat\gamma_1$ and $\hat\gamma_2$, I can recover the model in equation (1). These choices are $\hat\gamma_1 = \gamma_1 - a\,\bar w_1$ and $\hat\gamma_2 = \gamma_2 - a\,\bar w_2$. Plug these into the last equation and, since $\bar w_c$ is constant within each city so that $\hat\delta\,\bar w_c + \hat\gamma_c$ is just one number per city, we get

$$y_i = \beta s_i + \hat\delta\,\bar w_{c(i)} + \hat\gamma_{c(i)} + \varepsilon_i = \beta s_i + \gamma'_{c(i)} + \varepsilon_i, \qquad \gamma'_c \equiv \hat\delta\,\bar w_c + \hat\gamma_c = \delta\,\bar w_c + \gamma_c.$$
This is model (1), the equation we started with before a seminar participant asked what would happen if we controlled for average income where the subject grew up.
First of all, we see that we can be wrong by an arbitrary value $a$ in our estimate of $\delta$ and still get back to the original model. For any $a$, the combination $(\hat\delta, \hat\gamma_1, \hat\gamma_2) = (\delta + a,\; \gamma_1 - a\,\bar w_1,\; \gamma_2 - a\,\bar w_2)$ will return us to that model. The sum of squared errors will be the same using either parameter vector (the sum will be $\sum_i \varepsilon_i^2$), so they explain the data equally well.
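This algebra can be checked numerically. With made-up values for the fixed effects and city averages, shifting $\delta$ by any $a$ while subtracting $a\,\bar w_c$ from each fixed effect leaves every fitted value unchanged:

```python
# Numerical check of the non-identification argument, with made-up values:
# shift delta by any a and subtract a * w_bar_c from each city's fixed
# effect; every fitted value, and hence the sum of squared errors, is
# unchanged.
import numpy as np

w_bar = np.array([30.0, 50.0])   # assumed city-average incomes
gamma = np.array([2.0, 5.0])     # assumed true fixed effects
beta, delta = 1.5, 0.1           # assumed true parameters

rng = np.random.default_rng(1)
city = rng.integers(0, 2, 8)     # a handful of observations
sib = rng.integers(0, 2, 8)

def fitted(b, d, g, city, sib):
    """Fitted values of model (2) at parameters (b, d, g)."""
    return b * sib + d * w_bar[city] + g[city]

truth = fitted(beta, delta, gamma, city, sib)
for a in (-3.0, 0.7, 42.0):
    shifted = fitted(beta, delta + a, gamma - a * w_bar, city, sib)
    print(np.allclose(truth, shifted))  # True for every a
```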
This is not saying that (2) is necessarily wrong; I’m saying that we cannot distinguish between (1) and (2) because we cannot identify the parameters in (2). Because of that, and because of a remark further down, I think it is better to run with model (1).
But more important are the conclusions about $\beta$. Any deviation in the estimator of $\beta$ will increase the sum of squared errors, while the sum is completely unaffected by $a$. So $\beta$ is still identified in this model, and if that’s all we’re interested in, we don’t need to worry about adding controls that are collinear with FEs, or about whether they are identified.
You can also include an intercept in either model (1) or (2). Then you will see that the intercept has a different interpretation when using fixed effects. Since some people use the intercept as a benchmark, this matters.
It’s pretty cool that we don’t need to observe all these things that could influence the outcome variable if they are location-specific and we use location-specific FEs (it does matter if people can move around though).
This generalizes to time FEs (if you control for a variable that is constant within a time period, across the sample, it will be absorbed by the time FEs), to individual FEs (say you try to control for parent characteristics in an individual-level panel), etc. If you can write a control variable as $x_{it} = x_t$ for all $i$ (because it doesn’t vary across individuals), then it will be collinear with the time fixed effects (unless one is dropped). Fixed effects truly absorb everything that goes on, on average, within the group of observations they cover.
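A quick way to convince yourself of the absorption point is a rank check (the year-level “GDP” numbers below are hypothetical): a control that is constant within each time period lies in the span of the time dummies, so adding it does not enlarge the column space.

```python
# A variable that is constant within each year is an exact linear
# combination of the year dummies, so it cannot add rank (hypothetical data).
import numpy as np

year = np.array([2014, 2014, 2015, 2015, 2016, 2016])
gdp = {2014: 1.0, 2015: 1.1, 2016: 1.3}      # hypothetical year-level control
x_t = np.array([gdp[t] for t in year])

dummies = (year[:, None] == np.unique(year)).astype(float)
rank_without = np.linalg.matrix_rank(dummies)
rank_with = np.linalg.matrix_rank(np.column_stack([dummies, x_t]))
print(rank_without, rank_with)  # 3 3: the control adds nothing
```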
One last remark
A danger when not paying attention to our fixed effects is that we start interpreting the control variables after running an OLS. The output from Stata looks good enough, my important coefficient didn’t change, I’m happy. But if I ran model (2), controlling for $\bar w_{c(i)}$ in the two-city scenario, then Stata dropped one of the two fixed effects. In this case, say it was $\gamma_2$ that was dropped. Then what Stata actually estimated was

$$y_i = \beta s_i + \delta^* \bar w_{c(i)} + \gamma_1^* \mathbf{1}\{c(i) = 1\} + \varepsilon_i.$$
I.e., it didn’t include $\gamma_2$ in the equation, in order to make it possible to estimate $\delta^*$ and $\gamma_1^*$. But the true model is as in (2) (we just cannot estimate it).
By matching of coefficients, city by city,

$$\text{city 2:}\quad \delta^* \bar w_2 = \delta\,\bar w_2 + \gamma_2 \quad\Rightarrow\quad \delta^* = \delta + \frac{\gamma_2}{\bar w_2},$$

$$\text{city 1:}\quad \delta^* \bar w_1 + \gamma_1^* = \delta\,\bar w_1 + \gamma_1 \quad\Rightarrow\quad \gamma_1^* = \gamma_1 - \frac{\gamma_2}{\bar w_2}\,\bar w_1.$$
And then $\delta^* \neq \delta$ whenever $\gamma_2 \neq 0$. Again, the estimator of $\beta$ is unaffected by whatever collinear control variable is dropped, but the estimators of the controls are not equal to their estimands (the true values).
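The coefficient matching can be verified numerically (again with made-up parameter values): simulate noiseless data from model (2), drop the second city’s fixed effect, and the estimate of $\delta$ picks it up exactly.

```python
# Made-up check: with gamma_2 dropped, the delta estimate absorbs
# gamma_2 / w_bar_2, and the city-1 fixed effect shifts to compensate.
import numpy as np

rng = np.random.default_rng(2)
n = 200
city = rng.integers(0, 2, n)                 # 0 = city 1, 1 = city 2
sib = rng.integers(0, 2, n)
w_bar = np.where(city == 0, 30.0, 50.0)      # assumed city-average incomes
gamma_1, gamma_2 = 2.0, 5.0                  # assumed fixed effects
beta, delta = 1.5, 0.1                       # assumed true parameters
y = beta * sib + delta * w_bar + np.where(city == 0, gamma_1, gamma_2)  # no noise

# What the package effectively runs: sib, the control, and a city-1 dummy only
X = np.column_stack([sib, w_bar, (city == 0).astype(float)])
b_hat, d_star, g1_star = np.linalg.lstsq(X, y, rcond=None)[0]

print(d_star)    # delta + gamma_2 / w_bar_2 = 0.1 + 5.0 / 50.0 = 0.2
print(g1_star)   # gamma_1 - (gamma_2 / w_bar_2) * w_bar_1 = 2.0 - 0.1 * 30.0 = -1.0
print(b_hat)     # beta itself, 1.5, is recovered exactly
```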
In this case, if we were to ask “but what, then, is the marginal effect of $\bar w_c$ on $y_i$?” and looked at our estimator of $\delta$ (i.e., $\delta^*$), we would be drawing erroneous conclusions. My conclusion: don’t interpret control variables when using fixed effects. They will be a mix-up of effects from several sources and reflect covariates that were dropped due to collinearity.
If we also include an intercept term, then $\gamma_1$ will also be excluded from the estimation, and $\delta^*$ will have to explain part of that (as will the intercept). The problem gets worse!