Multiple estimations can be a chore to set up: varying left-hand sides (LHS), right-hand sides (RHS), or samples require either many lines of code, or loops with formula/data manipulation. The package `fixest` simplifies multiple estimations by providing an optimized procedure along with a clear and concise syntax.

On the one hand, stepwise functions facilitate the sequential inclusion of variables in the RHS or in the fixed-effects part of the formula. On the other hand, intuitive methods are introduced to manipulate the results from multiple estimations, making it easy to visualize or export any wanted set of results.

What does a multiple estimation look like? Let’s give it a try:

```
library(fixest)

base = iris
names(base) = c("y1", "y2", "x1", "x2", "species")
res_multi = feols(c(y1, y2) ~ x1 + csw(x2, x2^2) | sw0(species), base, fsplit = ~species)
```

With the previous line of code (90 characters long), we have just performed 32 estimations: eight different functional forms on four different samples.

The previous code leads to the following results:

`summary(res_multi, "compact", se = "hetero")`

```
## sample lhs fixef rhs (Intercept)
## 1 Full sample y1 1 x1 + x2 4.19*** (0.104)
## 2 Full sample y1 1 x1 + x2 + I(x2^2) 4.27*** (0.101)
## 3 Full sample y1 species x1 + x2
## 4 Full sample y1 species x1 + x2 + I(x2^2)
## 5 Full sample y2 1 x1 + x2 3.59*** (0.103)
## 6 Full sample y2 1 x1 + x2 + I(x2^2) 3.68*** (0.0969)
## 7 Full sample y2 species x1 + x2
## 8 Full sample y2 species x1 + x2 + I(x2^2)
## 9 setosa y1 1 x1 + x2 4.25*** (0.474)
## 10 setosa y1 1 x1 + x2 + I(x2^2) 4*** (0.504)
## 11 setosa y1 species x1 + x2
## 12 setosa y1 species x1 + x2 + I(x2^2)
## 13 setosa y2 1 x1 + x2 2.89*** (0.416)
## 14 setosa y2 1 x1 + x2 + I(x2^2) 2.82*** (0.423)
## 15 setosa y2 species x1 + x2
## 16 setosa y2 species x1 + x2 + I(x2^2)
## 17 versicolor y1 1 x1 + x2 2.38*** (0.423)
## 18 versicolor y1 1 x1 + x2 + I(x2^2) 0.323 (1.44)
## 19 versicolor y1 species x1 + x2
## 20 versicolor y1 species x1 + x2 + I(x2^2)
## 21 versicolor y2 1 x1 + x2 1.25*** (0.275)
## 22 versicolor y2 1 x1 + x2 + I(x2^2) 0.097 (1.01)
## 23 versicolor y2 species x1 + x2
## 24 versicolor y2 species x1 + x2 + I(x2^2)
## 25 virginica y1 1 x1 + x2 1.05. (0.539)
## 26 virginica y1 1 x1 + x2 + I(x2^2) -2.39 (2.04)
## 27 virginica y1 species x1 + x2
## 28 virginica y1 species x1 + x2 + I(x2^2)
## 29 virginica y2 1 x1 + x2 1.06. (0.572)
## 30 virginica y2 1 x1 + x2 + I(x2^2) 1.1 (1.76)
## 31 virginica y2 species x1 + x2
## 32 virginica y2 species x1 + x2 + I(x2^2)
## x1 x2 I(x2^2)
## 1 0.542*** (0.0761) -0.32. (0.17)
## 2 0.719*** (0.0815) -1.52*** (0.307) 0.348*** (0.0748)
## 3 0.906*** (0.0759) -0.006 (0.163)
## 4 0.9*** (0.0767) 0.29 (0.408) -0.0879 (0.117)
## 5 -0.257*** (0.0664) 0.364* (0.142)
## 6 -0.0301 (0.0778) -1.18*** (0.313) 0.446*** (0.0737)
## 7 0.155* (0.0735) 0.623*** (0.114)
## 8 0.148. (0.0755) 0.951* (0.472) -0.0973 (0.125)
## 9 0.399 (0.325) 0.712. (0.418)
## 10 0.405 (0.325) 2.51. (1.47) -2.91 (2.1)
## 11 0.399 (0.325) 0.712. (0.418)
## 12 0.405 (0.325) 2.51. (1.47) -2.91 (2.1)
## 13 0.247 (0.305) 0.702 (0.56)
## 14 0.248 (0.304) 1.27 (2.39) -0.911 (3.28)
## 15 0.247 (0.305) 0.702 (0.56)
## 16 0.248 (0.304) 1.27 (2.39) -0.911 (3.28)
## 17 0.934*** (0.166) -0.32 (0.364)
## 18 0.901*** (0.164) 3.01 (2.31) -1.24 (0.841)
## 19 0.934*** (0.166) -0.32 (0.364)
## 20 0.901*** (0.164) 3.01 (2.31) -1.24 (0.841)
## 21 0.0669 (0.0949) 0.929*** (0.244)
## 22 0.048 (0.0993) 2.8. (1.65) -0.695 (0.583)
## 23 0.0669 (0.0949) 0.929*** (0.244)
## 24 0.048 (0.0993) 2.8. (1.65) -0.695 (0.583)
## 25 0.995*** (0.0898) 0.00706 (0.205)
## 26 0.994*** (0.0881) 3.5. (2.09) -0.87 (0.519)
## 27 0.995*** (0.0898) 0.00706 (0.205)
## 28 0.994*** (0.0881) 3.5. (2.09) -0.87 (0.519)
## 29 0.149 (0.107) 0.535*** (0.122)
## 30 0.149 (0.108) 0.503 (1.56) 0.00798 (0.388)
## 31 0.149 (0.107) 0.535*** (0.122)
## 32 0.149 (0.108) 0.503 (1.56) 0.00798 (0.388)
```

This vignette now details how to perform multiple estimations across multiple LHSs, RHSs, fixed-effects, and samples. It then describes the various methods to access the results.

To perform an estimation on multiple LHSs, simply wrap the different LHSs in `c()`:

`etable(feols(c(y1, y2) ~ x1 + x2, base))`

```
## model 1 model 2
## Dependent Var.: y1 y2
##
## (Intercept) 4.19*** (0.097) 3.59*** (0.094)
## x1 0.542*** (0.069) -0.257*** (0.067)
## x2 -0.320* (0.160) 0.364* (0.155)
## _______________ ________________ _________________
## S.E. type Standard Standard
## Observations 150 150
## R2 0.76626 0.21310
## Adj. R2 0.76308 0.20240
```

To estimate multiple RHSs (or fixed-effects), you need to use a specific set of functions: the *stepwise* functions. There are four of them: `sw`, `sw0`, `csw`, and `csw0`.

- `sw`: this function is replaced sequentially by each of its arguments. For example, `y ~ x1 + sw(x2, x3)` leads to two estimations: `y ~ x1 + x2` and `y ~ x1 + x3`.
- `sw0`: identical to `sw` but first adds the empty element. E.g. `y ~ x1 + sw0(x2, x3)` leads to three estimations: `y ~ x1`, `y ~ x1 + x2`, and `y ~ x1 + x3`.
- `csw`: it stands for *cumulative* stepwise. It adds each of its arguments to the formula sequentially. E.g. `y ~ x1 + csw(x2, x3)` will become `y ~ x1 + x2` and `y ~ x1 + x2 + x3`.
- `csw0`: identical to `csw` but first adds the empty element. E.g. `y ~ x1 + csw0(x2, x3)` leads to three estimations: `y ~ x1`, `y ~ x1 + x2`, and `y ~ x1 + x2 + x3`.

The stepwise functions can be applied both to the linear part and the fixed-effects part of the formula. Note that at most one stepwise function can be applied per part.

Here is an example:

`etable(feols(y1 ~ csw(x1, x2) | sw0(species), base, cluster = ~species))`

```
## model 1 model 2 model 3 model 4
## Dependent Var.: y1 y1 y1 y1
##
## (Intercept) 4.31** (0.192) 4.19** (0.244)
## x1 0.409** (0.040) 0.542* (0.112) 0.905** (0.076) 0.906** (0.081)
## x2 -0.320 (0.171) -0.006 (0.126)
## Fixed-Effects: --------------- -------------- --------------- ---------------
## species No No Yes Yes
## _______________ _______________ ______________ _______________ _______________
## S.E.: Clustered by: species by: species by: species by: species
## Observations 150 150 150 150
## R2 0.75995 0.76626 0.83672 0.83673
## Within R2 -- -- 0.57178 0.57179
```

As you can see, if stepwise functions appear in both parts, there are as many estimations as the product of the number of specifications in each part.
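To make the counting concrete, here is a small sketch reusing `base` from above: a three-specification RHS crossed with a two-specification fixed-effects part gives 3 × 2 = 6 estimations.

```
# sw0(x1, x2)  -> 3 RHS specifications (empty, x1, x2)
# sw0(species) -> 2 fixed-effects specifications (none, species)
# Total: 3 x 2 = 6 estimations in a single call
res6 = feols(y1 ~ sw0(x1, x2) | sw0(species), base)
etable(res6)
```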

To perform split sample estimations, use either the argument `split` or `fsplit`. The argument `split` accepts a variable that will be treated as a factor; an estimation is then performed for each sub-sample defined by the levels of this variable. The argument `fsplit` is identical but first adds the full sample.

`etable(feols(y1 ~ x1 + x2, base, fsplit = ~species))`

```
## model 1 model 2 model 3
## Sample (species) Full sample setosa versicolor
## Dependent Var.: y1 y1 y1
##
## (Intercept) 4.19*** (0.097) 4.25*** (0.411) 2.38*** (0.449)
## x1 0.542*** (0.069) 0.399 (0.296) 0.934*** (0.169)
## x2 -0.320* (0.160) 0.712 (0.487) -0.320 (0.402)
## ________________ ________________ _______________ ________________
## S.E. type Standard Standard Standard
## Observations 150 50 50
## R2 0.76626 0.11173 0.57432
## Adj. R2 0.76308 0.07393 0.55620
## model 4
## Sample (species) virginica
## Dependent Var.: y1
##
## (Intercept) 1.05* (0.514)
## x1 0.995*** (0.089)
## x2 0.007 (0.179)
## ________________ ________________
## S.E. type Standard
## Observations 50
## R2 0.74689
## Adj. R2 0.73612
```

You can combine multiple LHSs with multiple RHSs, multiple fixed-effects, and multiple samples. The total number of estimations is always the product of the number of elements in each dimension.

We’ve just seen how to perform multiple estimations; now let’s see how to manipulate them. First, a multiple estimation is a `fixest_multi` object with its own set of methods. We can access its elements by using keys. There are five keys: `sample`, `lhs`, `rhs`, `fixef`, and `iv`.

`res_multi[sample = 1]` returns all the results for the first sample. `res_multi[lhs = .N]` returns all the results for the last dependent variable (the special variable `.N` refers to the last element). And so on.

You can combine different keys: `res_multi[sample = -1, lhs = 1]` selects the results for all samples but the first, for the first dependent variable.

Note that these arguments accept regular expressions, so `res_multi[fixef = "spe"]` returns all results for which the character string `"spe"` is contained in the fixed-effects part of the formula.
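Putting these keys together on the `res_multi` object from the introduction:

```
# All results for the first sample (the full sample)
res_multi[sample = 1]

# Results for the last dependent variable, via the special variable .N
res_multi[lhs = .N]

# All samples but the first, restricted to the first dependent variable
res_multi[sample = -1, lhs = 1]

# Keys also accept regular expressions
res_multi[fixef = "spe"]
```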

The results in a `fixest_multi` object have a specific order, organized as a tree. By default the order is \(sample \rightarrow lhs \rightarrow fixef \rightarrow rhs \rightarrow iv\).

Changing the order of the results is important to organize or export them. By default, when one accesses a `fixest_multi` object, the results are reordered according to the order of the arguments used.

For instance, `res_multi[rhs = 1:.N, fixef = 1:.N]` places the RHS at the root of the tree, followed by the fixed-effects; the sample and the LHS then follow.

The arguments also accept logical values: `res_multi[fixef = TRUE, sample = FALSE]` keeps *all* results but places `fixef` as the root and `sample` as the last leaf.

This subsetting can then be used to easily obtain the appropriate set of results and ordering:

`etable(res_multi[lhs = 1, fixef = 1, rhs = TRUE, sample = -1])`

```
## model 1 model 2 model 3
## Sample (species) setosa versicolor virginica
## Dependent Var.: y1 y1 y1
##
## (Intercept) 4.25*** (0.411) 2.38*** (0.449) 1.05* (0.514)
## x1 0.399 (0.296) 0.934*** (0.169) 0.995*** (0.089)
## x2 0.712 (0.487) -0.320 (0.402) 0.007 (0.179)
## x2 square
## ________________ _______________ ________________ ________________
## S.E. type Standard Standard Standard
## Observations 50 50 50
## R2 0.11173 0.57432 0.74689
## Adj. R2 0.07393 0.55620 0.73612
## model 4 model 5 model 6
## Sample (species) setosa versicolor virginica
## Dependent Var.: y1 y1 y1
##
## (Intercept) 4.00*** (0.489) 0.323 (1.79) -2.39 (2.17)
## x1 0.405 (0.296) 0.901*** (0.171) 0.994*** (0.088)
## x2 2.51 (2.01) 3.01 (2.84) 3.50 (2.15)
## x2 square -2.91 (3.16) -1.24 (1.04) -0.870 (0.534)
## ________________ _______________ ________________ ________________
## S.E. type Standard Standard Standard
## Observations 50 50 50
## R2 0.12786 0.58696 0.76071
## Adj. R2 0.07098 0.56002 0.74510
```

Defining the standard-errors at estimation time, using the arguments `se` or `cluster`, can be useful to obtain a coherent set of standard-errors across results, especially when the fixed-effects vary across models (since that changes the default clustering of the standard-errors).
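For instance, here the clustering is fixed once at estimation time, so all the models, with or without the fixed-effect, report standard-errors clustered the same way:

```
# The clustering is set once, at estimation time, so every model reports
# standard-errors clustered by species, fixed-effect or not
est_same_se = feols(y1 ~ x1 + x2 | sw0(species), base, cluster = ~species)
etable(est_same_se)
```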

IV estimations return a regular `fixest` object. However, applying `summary` to it can return a `fixest_multi` object: this is the case when both the first and second stage regressions are requested with the argument `stage = 1:2`. You can then cherry-pick the results as before using, e.g., `res[iv = 1]`. Note, importantly, that the index refers to the position in the results: `1` here does not necessarily mean the first stage.
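As an illustration only (the instrument choice below is arbitrary, not a meaningful model), here is a sketch treating `x1` as endogenous and `x2` as its instrument:

```
# Illustrative IV: x1 is endogenous, instrumented by x2
est_iv = feols(y1 ~ 1 | x1 ~ x2, base)

# Requesting both stages turns the summary into a fixest_multi object
res_iv = summary(est_iv, stage = 1:2)

# Pick results by their position in the object, not by stage number
res_iv[iv = 1]
```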

The objects returned by `fixest` estimations are large. They contain the information necessary to apply various methods without incurring additional computing costs; this is particularly true for clustering the standard-errors. Stated differently, speed is privileged over memory usage.

The problem with multiple estimations is that it is very easy to perform a great many estimations, so the object size can balloon and get out of control. To circumvent this issue, here's what to do:

- use the arguments `se` or `cluster` to get a summary of the results with the appropriate standard-errors at estimation time,
- use the argument `lean = TRUE`.

This will perform the estimation with the appropriate standard errors (point 1) and clean any large object from the results (point 2).
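A minimal sketch combining both points:

```
# Standard-errors are set at estimation time (point 1) and any large
# object is stripped from each result (point 2)
est_lean = feols(c(y1, y2) ~ x1 + sw0(x2) | sw0(species), base,
                 cluster = ~species, lean = TRUE)
```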

The drawback is that you won't be able to apply some methods to the results afterwards (like changing the type of standard-errors, or using `predict`, `resid`, etc.). But the amount of memory saved can be considerable.