BLPestimatoR - Package for Demand Estimation

Daniel Brunner

2019-05-14

Intro

BLPestimatoR provides an efficient estimation algorithm to perform the demand estimation described in Berry, Levinsohn, and Pakes (1995). The routine uses analytic gradients and offers a large number of optimization routines and implemented integration methods, as discussed in Brunner, Heiss, Romahn, and Weiser (2017).

This extended documentation demonstrates the steps of a typical demand estimation with the package: preparing the data, performing the estimation, and running postestimation analyses such as standard errors and elasticities.

The use of the algorithm’s building blocks is explained as well. The well-known training data for the cereal market from Nevo (2001) is used for demonstration purposes. Loading the package is the very first step of the demand estimation:

library(BLPestimatoR)

Data

Model

Since version 0.1.6 the model is provided in R’s formula syntax and consists of five parts. The variable to be explained is given by observed market shares. Explanatory variables are grouped into four (possibly overlapping) categories separated by |: linear variables, exogenous variables, random coefficients, and instruments.

Nevo’s model can be translated into the following formula syntax:
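A sketch of the corresponding formula is given below; the ordering of the four blocks and the use of 0 + to drop intercepts in the exogenous and instrument blocks are assumptions, while the variable names come from productData shown further down:

# Sketch of the model formula, assuming the block order
# linear | exogenous | random coefficients | instruments.
nevos_model <- as.formula("share ~ price + productdummy |
                           0 + productdummy |
                           price + sugar + mushy |
                           0 + IV1 + IV2 + IV3 + IV4 + IV5 + IV6 + IV7 + IV8 +
                           IV9 + IV10 + IV11 + IV12 + IV13 + IV14 + IV15 + IV16 +
                           IV17 + IV18 + IV19 + IV20")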

The model is directly related to consumer \(i\)’s indirect utility from purchasing cereal \(j\) in market \(t\):

\[u_{ijt}=\sum_{m=1}^M x^{(m)}_{jt} \beta_{i,m}+\xi_{jt}+\epsilon_{ijt} \;\; \text{with}\] \[\beta_{i,m}= \bar{\beta}_m + \sum_{r=1}^R \gamma_{m,r} d_{i,r} + \sigma_m \nu_{i,m}\] and

\[\theta_2 = \begin{pmatrix} \sigma_1 & \gamma_{1,1} & \cdots & \gamma_{1,R} \\ \sigma_2 & \gamma_{2,1} & \cdots & \gamma_{2,R} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_M & \gamma_{M,1} & \cdots & \gamma_{M,R} \end{pmatrix}\]

Dataframe

Product-related variables are collected in the dataframe productData, which contains one observation per product and market and must include the observed market shares, a market identifier (market_identifier), and all explanatory variables and instruments that appear in the formula.

A variable that uniquely identifies a product in a market (product_identifier) is optional, but enhances clarity (interpreting elasticities, for example, is much easier). market_identifier and product_identifier together uniquely identify an observation, which is used by the function update_BLP_data to update any variable in the data (in this case product_identifier is mandatory).

In the cereal example, this gives the following dataframe:

head(productData)
#>        price const sugar mushy       share     cdid        IV1        IV2
#> 1 0.07208794     1     2     1 0.012417212 market_1 -0.2159728 0.04057341
#> 2 0.11417849     1    18     1 0.007809387 market_1 -0.2452393 0.05474226
#> 3 0.13239066     1     4     1 0.012994511 market_1 -0.1764587 0.04659597
#> 4 0.13034408     1     3     0 0.005769961 market_1 -0.1214013 0.04876037
#> 5 0.15482331     1    12     0 0.017934141 market_1 -0.1326114 0.03962835
#> 6 0.13704921     1    14     0 0.026601892 market_1 -0.1534998 0.04298842
#>          IV3          IV4         IV5           IV6        IV7         IV8
#> 1  -3.247948 -0.523937695 -0.23246005  0.0068326605  3.1397395 -0.57478633
#> 2 -19.832461 -0.180519694  0.01468859  0.0007988026  0.2876539  0.03293960
#> 3  -2.878531 -0.284219004 -0.21553691 -0.0318693281  2.8862741 -0.74976495
#> 4  -2.059918 -0.328412257 -0.22206995 -0.0314740402  4.4531096  0.25567529
#> 5  -6.137598 -0.138625095 -0.18936521 -0.0437471023 -3.5546508  0.13882114
#> 6  -8.417332  0.007829087 -0.13850121 -0.0210582272 -2.7594799  0.05020052
#>          IV9       IV10       IV11        IV12          IV13       IV14
#> 1  0.2062202  0.1774656  2.1163580 -0.15470824 -0.0057964065 0.01453802
#> 2  0.1051208 -0.2875618 -7.3740909 -0.57641176  0.0129908544 0.07614324
#> 3 -0.4789565  0.2147389  2.1878721 -0.20734643  0.0035092777 0.09178117
#> 4 -0.4729673  0.3560980  2.7045762  0.04074801 -0.0037242656 0.09473168
#> 5 -0.6886784  0.2602726  1.2612419  0.03483558 -0.0005676374 0.10245147
#> 6 -0.2734440  0.1273060  0.3375543  0.02351037  0.0002637777 0.08627983
#>         IV15       IV16       IV17       IV18       IV19       IV20
#> 1 0.12624398 0.06734464 0.06842261 0.03480046 0.12634612 0.03548368
#> 2 0.02973565 0.08786672 0.11050060 0.08778380 0.04987192 0.07257905
#> 3 0.16377308 0.11188073 0.10822551 0.08643905 0.12234707 0.10184248
#> 4 0.13527378 0.08809001 0.10176745 0.10177748 0.11074119 0.10433204
#> 5 0.13063951 0.08481820 0.10107461 0.12516923 0.13346381 0.12111110
#> 6 0.07233581 0.02225051 0.10564387 0.11603699 0.09965063 0.10572660
#>   product_id productdummy
#> 1   cereal_1     product1
#> 2   cereal_2     product2
#> 3   cereal_3     product3
#> 4   cereal_4     product4
#> 5   cereal_5     product5
#> 6   cereal_6     product6

Integration Draws

The arguments related to the numerical integration problem are of particular importance when you provide your own integration draws and weights, which is most relevant for observed heterogeneity (for unobserved heterogeneity, the straightforward approach is to use automatic integration).

In the cereal data, both observed and unobserved heterogeneity are used for the random coefficients. Starting with observed heterogeneity, user-provided draws are collected in a list. Each list entry must be named after a demographic and contains a dataframe with the market identifier and one column of draws per individual.

In the cereal example, observed heterogeneity is provided as follows (list names correspond to the demographics):
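The exact draws are not reproduced here; the following minimal sketch shows the expected structure, assuming the market identifier column carries the same name as in productData (cdid). The draw column names (draw_1, …, draw_20) and the values are purely illustrative:

# Minimal sketch: one list entry per demographic, each a dataframe with the
# market identifier and one column per individual draw (94 markets, 20 individuals
# as in the cereal example; the random values are illustrative only).
n_mkt <- 94; n_ind <- 20
make_demo <- function() {
  cbind(data.frame(cdid = paste0("market_", 1:n_mkt)),
        matrix(rnorm(n_mkt * n_ind), nrow = n_mkt,
               dimnames = list(NULL, paste0("draw_", 1:n_ind))))
}
demographicData <- list(income   = make_demo(),
                        incomesq = make_demo(),
                        age      = make_demo(),
                        child    = make_demo())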

If demographic input (demographicData) is missing, the estimation routine considers only coefficients for unobserved heterogeneity. These can be handled by the implemented integration methods via integration_method, as shown in the estimation section. In Nevo’s cereal example, however, a specific set of 20 draws is given. In this situation, draws are also provided as a list (list names correspond to the formula’s random coefficients, and each list entry contains the variable market_identifier):
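Reusing the make_demo() helper from the sketch above, the structure of these draws might look as follows (in the actual cereal example, Nevo’s original set of 20 draws per market is supplied instead of random numbers):

originalDraws <- list("(Intercept)" = make_demo(),
                      price         = make_demo(),
                      sugar         = make_demo(),
                      mushy         = make_demo())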

As demonstrated above, the list entry containing draws for the constant must be named (Intercept). The names of the other list entries must match the random coefficients specified in the formula.

Calling BLP_data

Calling BLP_data structures and prepares the data for estimation and creates the data object:
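The call below mirrors the BLP_data() arguments that are echoed at the start of the estimation output further down:

nevo_data <- BLP_data(model = nevos_model,
                      market_identifier = "cdid",
                      product_identifier = "product_id",
                      par_delta = "startingGuessesDelta",
                      productData = productData,
                      demographic_draws = demographicData,
                      integration_draws = originalDraws,
                      integration_weights = rep(1/20, 20),
                      blp_inner_tol = 1e-06,
                      blp_inner_maxit = 5000)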

The arguments in greater detail:

- model: the formula described in the Model section,
- market_identifier and product_identifier: the names of the variables in productData that identify markets and products,
- par_delta: the name of the variable in productData that holds the starting guesses for the mean utilities,
- productData: the dataframe of market shares, product characteristics, and instruments,
- demographic_draws, integration_draws, and integration_weights: the lists of draws and the corresponding weights described in the previous section,
- blp_inner_tol and blp_inner_maxit: the tolerance and the maximum number of iterations of the contraction mapping (inner loop).

If you decide to update your data later, you can use the function update_BLP_data.
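A sketch of such an update is given below, assuming update_BLP_data() accepts a dataframe of new values (data_update) that is matched to the prepared object via the market and product identifiers; the argument names and the use of a single updated observation are assumptions:

# Hypothetical example: change the price of cereal_1 in market_1.
price_update <- data.frame(cdid = "market_1",
                           product_id = "cereal_1",
                           price = 0.08)
nevo_data_updated <- update_BLP_data(data_update = price_update,
                                     blp_data = nevo_data)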

Estimation

Starting guesses

The provided set of starting guesses par_theta2 is matched with the formula input and the demographic data: rows correspond to the random coefficients of the formula, and columns correspond to the standard deviation of the unobserved heterogeneity (unobs_sd) and the demographics.

These requirements are demonstrated with a set of exemplary starting guesses:
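The sketch below builds a named starting-value matrix whose row and column names follow the labels used later in the summary output; the values are rounded from the initial GMM evaluation reported in the Modular Examples section, whereas the original guesses carry more decimal places:

# Rounded starting guesses; zero entries correspond to interactions that do not
# appear among the 13 estimated non-linear parameters.
theta_guesses <- matrix(c(0.33,  5.48,  0.00,  0.20,  0.00,
                          2.45, 15.89, -1.20,  0.00,  2.63,
                          0.02, -0.25,  0.00,  0.05,  0.00,
                          0.24,  1.26,  0.00, -0.81,  0.00),
                        nrow = 4, byrow = TRUE,
                        dimnames = list(c("(Intercept)", "price", "sugar", "mushy"),
                                        c("unobs_sd", "income", "incomesq", "age", "child")))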

Calling estimateBLP

The following code performs the demand estimation:


blp_est <- estimateBLP(blp_data = nevo_data,
                       par_theta2 = theta_guesses,
                       solver_method = "BFGS", solver_maxit = 1000, solver_reltol = 1e-6,
                       standardError = "heteroskedastic",
                       extremumCheck = FALSE,
                       printLevel = 1)
#> blp_data were prepared with the following arguments:
#> BLP_data(model = nevos_model, market_identifier = "cdid", product_identifier = "product_id", 
#>     par_delta = "startingGuessesDelta", productData = productData, 
#>     demographic_draws = demographicData, integration_draws = originalDraws, 
#>     integration_weights = rep(1/20, 20), blp_inner_tol = 1e-06, 
#>     blp_inner_maxit = 5000)
#> Starting a BLP demand estimation with  2256  observations in  94  markets...
#> [integration::method  integration::amountDraws 20 ]
#> [blp::inner_tol 1e-06  blp::inner_maxit 5000 ]
#> [solver::method BFGS  solver::maxit 1000  solver::reltol 1e-06 ]
#> gmm objective: 29.3522
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: 22451.18
#> gmm objective: 775.9515
#> gmm objective: 71.6209
#> gmm objective: 26.8851
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: 330914
#> gmm objective: 12694.84
#> gmm objective: 311.8842
#> gmm objective: 38.2627
#> gmm objective: 26.9935
#> gmm objective: 26.8196
#> gmm objective: 40772.63
#> gmm objective: 1186.144
#> gmm objective: 103.4133
#> gmm objective: 27.9556
#> gmm objective: 26.1524
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: 28234.6
#> gmm objective: 756.8709
#> gmm objective: 37.7704
#> gmm objective: 24.7211
#> gmm objective: 5772.739
#> gmm objective: 202.5456
#> gmm objective: 27.2616
#> gmm objective: 23.9898
#> gmm objective: 838.469
#> gmm objective: 32.1783
#> gmm objective: 21.9662
#> gmm objective: 23912.56
#> gmm objective: 244.3514
#> gmm objective: 20.9664
#> gmm objective: 1417.216
#> gmm objective: 56.3625
#> gmm objective: 21.2928
#> gmm objective: 20.6459
#> gmm objective: 24.7031
#> gmm objective: 19.3915
#> gmm objective: 192.3949
#> gmm objective: 18.8625
#> gmm objective: 20.5334
#> gmm objective: 17.7234
#> gmm objective: 16.7846
#> gmm objective: 16.4664
#> gmm objective: 15.9588
#> gmm objective: 15.0231
#> gmm objective: 14.9101
#> gmm objective: 14.8987
#> gmm objective: 14.8961
#> gmm objective: 14.8902
#> gmm objective: 14.8788
#> gmm objective: 14.8479
#> gmm objective: 14.7731
#> gmm objective: 14.5807
#> gmm objective: 14.0873
#> gmm objective: 12.8012
#> gmm objective: 11.9828
#> gmm objective: 9.4672
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: 150504.2
#> gmm objective: 6437.658
#> gmm objective: 224.7393
#> gmm objective: 13.0643
#> gmm objective: 9.2023
#> gmm objective: Inf [delta contains NaN's] 
#> gmm objective: 10140.17
#> gmm objective: 554.5767
#> gmm objective: 34.6091
#> gmm objective: 10.0322
#> gmm objective: 9.1844
#> gmm objective: 75730.14
#> gmm objective: 2145.751
#> gmm objective: 45.6103
#> gmm objective: 9.9089
#> gmm objective: 9.0889
#> gmm objective: 111102.8
#> gmm objective: 15512.82
#> gmm objective: 389.1241
#> gmm objective: 20.1367
#> gmm objective: 8.7624
#> gmm objective: 766.0638
#> gmm objective: 49.5382
#> gmm objective: 10.0242
#> gmm objective: 8.6598
#> gmm objective: 155.1232
#> gmm objective: 12.3475
#> gmm objective: 8.5964
#> gmm objective: 308.36
#> gmm objective: 10.1499
#> gmm objective: 8.3448
#> gmm objective: 1593.047
#> gmm objective: 18.5902
#> gmm objective: 7.9521
#> gmm objective: 24.6953
#> gmm objective: 7.4967
#> gmm objective: 7.8465
#> gmm objective: 7.4219
#> gmm objective: 7.3411
#> gmm objective: 7.2325
#> gmm objective: 7.0497
#> gmm objective: 6.9818
#> gmm objective: 6.9711
#> gmm objective: 6.9695
#> gmm objective: 6.9696
#> gmm objective: 6.9692
#> gmm objective: 6.9633
#> gmm objective: 6.9462
#> gmm objective: 6.8986
#> gmm objective: 6.8301
#> gmm objective: 6.6517
#> gmm objective: 6.3046
#> gmm objective: 5.7187
#> gmm objective: 5.0274
#> gmm objective: 4.6491
#> gmm objective: 4.5752
#> gmm objective: 54557.62
#> gmm objective: 2335.286
#> gmm objective: 90.1735
#> gmm objective: 7.3992
#> gmm objective: 4.667
#> gmm objective: 4.576
#> gmm objective: 4.575
#> gmm objective: 14656.41
#> gmm objective: 601.8347
#> gmm objective: 32.5354
#> gmm objective: 5.863
#> gmm objective: 4.6245
#> gmm objective: 4.5771
#> gmm objective: 4.5758
#> gmm objective: 4.5759
#> gmm objective: 4.5757
#> gmm objective: 4.5755
#> gmm objective: 4.5754
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 16969.25
#> gmm objective: 603.3225
#> gmm objective: 23.8438
#> gmm objective: 5.2753
#> gmm objective: 4.5966
#> gmm objective: 4.575
#> gmm objective: 4.5752
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5753
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> gmm objective: 4.5752
#> ------------------------------------------ 
#> Solver message: Successful convergence 
#> ------------------------------------------ 
#> Final GMM evaluation at optimal parameters: 
#> gmm objective: 4.5744
#>   theta (RC): 0.55 3.29 -0.01 0.09 
#>   theta (demogr.): 2.3 577.44 -0.38 0.83 0 -29.63 0 0 1.27 0 0.05 -1.35 0 11.03 0 0 
#>   inner iterations: 70 
#>   gradient: -0.0605 0.0017 -0.162 -0.0354 -0.0926 -0.0083 -2.2061 0.2945 -0.1468 0.0797 -1.2189 0.2315 2e-04 
#> Using the heteroskedastic asymptotic variance-covariance matrix...

summary(blp_est)
#> 
#> Data information:
#> 
#>   94 market(s) with 2256 products 
#>   25 linear coefficient(s) (24 exogenous coefficients) 
#>   13 non-linear parameters related to random coefficients 
#>   4 demographic variable(s) 
#> 
#> Estimation results:
#> 
#>  Linear Coefficients
#>                          Estimate Std. Error   t value     Pr(>|t|)
#> (Intercept)            -2.5769264  0.8531569 -3.020460 2.523909e-03
#> price                 -62.1403350 14.5135858 -4.281529 1.856137e-05
#> productdummyproduct10   1.9000897  0.6653478  2.855784 4.293071e-03
#> productdummyproduct11   3.6767952  0.6664428  5.517045 3.447468e-08
#> productdummyproduct12   0.2615994  0.2287542  1.143583 2.527967e-01
#> productdummyproduct13   2.3591835  0.2184213 10.801071 3.402135e-27
#> productdummyproduct14   3.8446786  0.6855662  5.608034 2.046376e-08
#> productdummyproduct15   4.2873240  0.6779154  6.324276 2.544228e-10
#> productdummyproduct16   6.0916947  0.6723137  9.060792 1.295068e-19
#> productdummyproduct17   2.6036321  0.6510521  3.999115 6.357976e-05
#> productdummyproduct18   1.4324858  0.6079004  2.356448 1.845065e-02
#> productdummyproduct19   4.3135147  0.6191372  6.966977 3.238237e-12
#> productdummyproduct2    4.8750516  0.6334282  7.696297 1.400664e-14
#> productdummyproduct20   4.1706988  0.6075700  6.864557 6.669789e-12
#> productdummyproduct21   4.9106550  0.6827987  7.191951 6.387201e-13
#> productdummyproduct22   1.7215961  0.5917418  2.909370 3.621574e-03
#> productdummyproduct23   3.2960677  0.6839398  4.819237 1.441085e-06
#> productdummyproduct24   1.8879979  0.7307294  2.583717 9.774202e-03
#> productdummyproduct3    2.3028803  0.2314068  9.951652 2.480314e-23
#> productdummyproduct4    1.3668373  0.5940217  2.300989 2.139225e-02
#> 
#> ...
#> 
#>  5 estimates are omitted. They are available in the LinCoefficients generated by summary.
#> 
#>  Random Coefficients
#>                           Estimate   Std. Error    t value     Pr(>|t|)
#> unobs_sd*(Intercept)   0.551339802   0.16015070  3.4426313 0.0005760842
#> unobs_sd*price         3.285559241   1.30636137  2.5150462 0.0119016778
#> unobs_sd*sugar        -0.005237547   0.01336128 -0.3919943 0.6950624579
#> unobs_sd*mushy         0.091406886   0.18480221  0.4946201 0.6208683093
#> income*(Intercept)     2.303677743   1.19131427  1.9337280 0.0531465811
#> income*price         577.439894801 264.92313033  2.1796507 0.0292833612
#> income*sugar          -0.384035468   0.11965304 -3.2095755 0.0013293115
#> income*mushy           0.826450257   0.77955169  1.0601609 0.2890713880
#> incomesq*price       -29.627526239  13.80993188 -2.1453782 0.0319226238
#> age*(Intercept)        1.268858974   0.62894244  2.0174485 0.0436487312
#> age*sugar              0.051714014   0.02546546  2.0307513 0.0422802275
#> age*mushy             -1.350839887   0.66343728 -2.0361230 0.0417380079
#> child*price           11.025866516   4.11565257  2.6790081 0.0073840611
#> 
#>  Wald Test
#> 126.7789 on  13 DF, p-value: 9.13936399910884e-21 
#> 
#> Computational Details: 
#>   Solver converged with 173 iterations to a minimum at 4.5744 .
#>   Local minima check: NA
#>   stopping criterion outer loop: 1e-06
#>   stopping criterion inner loop: 1e-06
#>   Market shares are integrated with provided_by_user and 20 draws. 
#>   Method for standard errors: heteroskedastic

The arguments in greater detail:

- blp_data: the data object created by BLP_data,
- par_theta2: starting guesses for the non-linear parameters,
- solver_method, solver_maxit, and solver_reltol: the optimization routine of the outer loop and its stopping criteria,
- standardError: the method used to compute standard errors (here heteroskedastic; see the Postestimation section),
- extremumCheck: whether the optimum is checked for being a local minimum (reported as the local minima check in the summary),
- printLevel: the amount of information printed during the estimation.

Many of these arguments have default values. The following call shows a minimal set of required arguments, with automatically generated integration draws and only unobserved heterogeneity. The summary output informs you about the most important default values.

nevo_data2 <- BLP_data(model = nevos_model,
                       market_identifier = "cdid",
                       product_identifier = "product_id",
                       productData = productData,
                       integration_method = "MLHS",
                       integration_accuracy = 20, integration_seed = 213)
#> Mean utility (variable name: `delta`) is initialized with 0 because of missing or invalid par_delta argument.

blp_est2 <- estimateBLP( blp_data=nevo_data2, printLevel = 1 )
#> blp_data were prepared with the following arguments:
#> BLP_data(model = nevos_model, market_identifier = "cdid", product_identifier = "product_id", 
#>     productData = productData, integration_accuracy = 20, integration_method = "MLHS", 
#>     integration_seed = 213)
#> Starting a BLP demand estimation with  2256  observations in  94  markets...
#> [integration::method  integration::amountDraws 20 ]
#> [blp::inner_tol 1e-09  blp::inner_maxit 10000 ]
#> [solver::method BFGS  solver::maxit 10000  solver::reltol 1e-06 ]
#> gmm objective: 189.9432
#> gmm objective: 5632.612
#> gmm objective: 439.3216
#> gmm objective: 202.5585
#> gmm objective: 190.4644
#> gmm objective: 189.9569
#> gmm objective: 189.9422
#> gmm objective: 366.2611
#> gmm objective: 196.3801
#> gmm objective: 190.0315
#> gmm objective: 189.9122
#> gmm objective: 240.4055
#> gmm objective: 191.6947
#> gmm objective: 189.9326
#> gmm objective: 189.903
#> gmm objective: 189.8537
#> gmm objective: 189.8352
#> gmm objective: 189.8352
#> gmm objective: 212.3727
#> gmm objective: 190.7591
#> gmm objective: 189.8716
#> gmm objective: 189.8367
#> gmm objective: 189.8353
#> gmm objective: 189.8352
#> gmm objective: 189.8352
#> ------------------------------------------ 
#> Solver message: Successful convergence 
#> ------------------------------------------ 
#> Final GMM evaluation at optimal parameters: 
#> gmm objective: 189.8352
#>   theta (RC): 0 -0.32 0 0.06 
#>   theta (demogr.):  
#>   inner iterations: 61 
#>   gradient: 0 0 0.0316 0 
#> Using the heteroskedastic asymptotic variance-covariance matrix...

summary(blp_est2) 
#> 
#> Data information:
#> 
#>   94 market(s) with 2256 products 
#>   25 linear coefficient(s) (24 exogenous coefficients) 
#>   4 non-linear parameters related to random coefficients 
#>   0 demographic variable(s) 
#> 
#> Estimation results:
#> 
#>  Linear Coefficients
#>                          Estimate Std. Error    t value      Pr(>|t|)
#> (Intercept)            -1.7731873  0.1602903 -11.062350  1.910261e-28
#> price                 -30.1150257  1.2899147 -23.346526 1.494838e-120
#> productdummyproduct10   1.8588694  0.1680453  11.061715  1.923839e-28
#> productdummyproduct11   1.7623940  0.1495181  11.787162  4.546047e-32
#> productdummyproduct12  -0.2874741  0.1592495  -1.805180  7.104647e-02
#> productdummyproduct13   2.2500443  0.1511584  14.885338  4.104043e-50
#> productdummyproduct14   1.7795210  0.1626780  10.938918  7.509066e-28
#> productdummyproduct15   2.1598572  0.1600857  13.491879  1.745928e-41
#> productdummyproduct16   3.6903310  0.1718044  21.479845 2.403069e-102
#> productdummyproduct17   0.7890090  0.1696815   4.649940  3.320313e-06
#> productdummyproduct18   0.8752804  0.1637558   5.345035  9.039982e-08
#> productdummyproduct19   1.7678372  0.1493846  11.834129  2.600273e-32
#> productdummyproduct2    2.3665019  0.1535953  15.407387  1.459966e-53
#> productdummyproduct20   2.8086743  0.1635328  17.174992  4.087069e-66
#> productdummyproduct21   2.6809487  0.1689161  15.871478  9.986136e-57
#> productdummyproduct22   0.4722649  0.1546958   3.052862  2.266698e-03
#> productdummyproduct23   1.3755276  0.1541332   8.924278  4.486152e-19
#> productdummyproduct24   2.1728901  0.1699102  12.788464  1.901958e-37
#> productdummyproduct3    1.8016223  0.1488931  12.100102  1.054799e-33
#> productdummyproduct4    0.7639859  0.1468578   5.202216  1.969266e-07
#> 
#> ...
#> 
#>  5 estimates are omitted. They are available in the LinCoefficients generated by summary.
#> 
#>  Random Coefficients
#>                           Estimate Std. Error      t value  Pr(>|t|)
#> unobs_sd*(Intercept) -0.0028937363  1.1219334 -0.002579241 0.9979421
#> unobs_sd*price       -0.3179154684  9.1643440 -0.034690478 0.9723266
#> unobs_sd*sugar       -0.0003841069  0.1139372 -0.003371217 0.9973102
#> unobs_sd*mushy        0.0603775693  2.2758039  0.026530216 0.9788344
#> 
#>  Wald Test
#> 0.0018 on  4 DF, p-value: 0.9999995836557 
#> 
#> Computational Details: 
#>   Solver converged with 25 iterations to a minimum at 189.8352 .
#>   Local minima check: NA
#>   stopping criterion outer loop: 1e-06
#>   stopping criterion inner loop: 1e-09
#>   Market shares are integrated with MLHS and 20 draws. 
#>   Method for standard errors: heteroskedastic

Postestimation

Standard Errors

Standard errors can be computed with three options that differ in their assumptions about the unobserved characteristic \(\xi\), which consists of \(N\) elements. \(\Omega\) denotes the variance-covariance matrix of \(\xi\).

\[\Omega = \begin{pmatrix} \Sigma_1 & 0 & \dots & 0\\ 0 & \Sigma_2 & & 0\\ \vdots & & \ddots & 0\\ 0 & 0 & 0 & \Sigma_M \\ \end{pmatrix}\]

Elasticities

The following code demonstrates the calculation of elasticities.
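The sketch below uses the package’s get_elasticities() function; the argument names shown are assumptions, the linear price coefficient is taken (rounded) from the summary output above, and the shares are evaluated at the starting guesses purely for illustration (at the estimate, you would use the estimated \(\theta_2\)):

# Share probabilities at a given theta_2, combined with the linear price
# coefficient, yield price elasticities for selected products in one market.
shareObj <- getShareInfo(blp_data = nevo_data,
                         par_theta2 = theta_guesses,
                         printLevel = 1)
get_elasticities(blp_data   = nevo_data,
                 share_info = shareObj,
                 theta_lin  = -62.14,   # linear price coefficient (rounded) from summary(blp_est)
                 variable   = "price",
                 products   = c("cereal_1", "cereal_4"),
                 market     = "market_1")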

The entry of the elasticity matrix in row \(j\) and column \(i\) for a variable \(x\) gives the percentage change in the market share of product \(j\) in response to a one-percent change in product \(i\)’s characteristic \(x\).

Modular Examples

Further analysis, like incorporating a supply side or performing a merger simulation, often requires access to the building blocks of the BLP algorithm. The following wrappers ensure correct data inputs and access the internal functions of the algorithm.

In the following, you find an example of the contraction mapping and an evaluation of the GMM function at the starting guess:


delta_eval <- getDelta_wrap(
  blp_data = nevo_data,
  par_theta2 = theta_guesses,
  printLevel = 4
)
#> ----------------------
#>   dist: 0.230998
#>   dist: 0.188133
#>   dist: 0.161181
#>   dist: 0.133303
#>   dist: 0.107161
#>   dist: 0.0844808
#>   dist: 0.0657196
#>   dist: 0.0506556
#>   dist: 0.0387915
#>   dist: 0.0295683
#>   dist: 0.0224622
#>   dist: 0.0170219
#>   dist: 0.0128758
#>   dist: 0.00972657
#>   dist: 0.00734024
#>   dist: 0.00553524
#>   dist: 0.00417178
#>   dist: 0.00314287
#>   dist: 0.00236698
#>   dist: 0.00178222
#>   dist: 0.00134169
#>   dist: 0.00100991
#>   dist: 0.000760105
#>   dist: 0.000572046
#>   dist: 0.000430491
#>   dist: 0.00032395
#>   dist: 0.000243769
#>   dist: 0.00019173
#>   dist: 0.000156244
#>   dist: 0.000127321
#>   dist: 0.000103748
#>   dist: 8.45372e-05
#>   dist: 6.88821e-05
#>   dist: 5.61251e-05
#>   dist: 4.573e-05
#>   dist: 3.72598e-05
#>   dist: 3.03582e-05
#>   dist: 2.47347e-05
#>   dist: 2.01528e-05
#>   dist: 1.64195e-05
#>   dist: 1.33778e-05
#>   dist: 1.08995e-05
#>   dist: 8.88033e-06
#>   dist: 7.23518e-06
#>   dist: 5.8948e-06
#>   dist: 4.80272e-06
#>   dist: 3.91296e-06
#>   dist: 3.18804e-06
#>   dist: 2.59741e-06
#>   dist: 2.14713e-06
#>   dist: 1.78121e-06
#>   dist: 1.47765e-06
#>   dist: 1.22582e-06
#>   dist: 1.01691e-06
#>   dist: 9.93231e-07

productData$startingGuessesDelta[1:6]
#> [1] -3.944367 -2.845205 -3.958199 -4.934153 -2.425356 -4.086816
delta_eval$delta[1:6]
#> cereal_1_market_1 cereal_2_market_1 cereal_3_market_1 cereal_4_market_1 
#>         -7.069801         -4.357675         -6.056909         -5.887501 
#> cereal_5_market_1 cereal_6_market_1 
#>         -3.501289         -3.079323
delta_eval$counter
#> [1] 55

gmm <- gmm_obj_wrap(
  blp_data = nevo_data,
  par_theta2 = theta_guesses,
  printLevel = 2
)
#> gmm objective: 29.3544
#>   theta (RC): 0.33 2.45 0.02 0.24 
#>   theta (demogr.): 5.48 15.89 -0.25 1.26 0 -1.2 0 0 0.2 0 0.05 -0.81 0 2.63 0 0 
#>   inner iterations: 55 
#>   gradient: 9.8468 0.3171 363.5319 16.3598 10.6031 0.7028 42.5236 -3.4759 13.4982 -2.0231 10.9331 1.2851 -0.5714 
#> [1] 29.35439

gmm$local_min
#> [1] 29.35439

The distances printed during the contraction mapping are the maximum absolute difference between the current vector of mean utilities and the previous one.
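As a small illustration (delta_old and delta_new stand in for two hypothetical successive iterates):

delta_old <- delta_eval$delta      # previous vector of mean utilities
delta_new <- delta_old + 1e-7      # current vector of mean utilities
max(abs(delta_new - delta_old))    # the quantity reported as 'dist'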

For any \(\theta_2\), you can compute predicted shares:

shareObj <- getShareInfo(  blp_data=nevo_data,
                           par_theta2 = theta_guesses,
                           printLevel = 4)
#> Mean utility (delta) is used as provided in the BLP_data() function.
shareObj$shares[1:6]
#> cereal_1_market_1 cereal_2_market_1 cereal_3_market_1 cereal_4_market_1 
#>       0.052856545       0.002175260       0.006105588       0.001529422 
#> cereal_5_market_1 cereal_6_market_1 
#>       0.001571742       0.003654192

The object contains a list of outputs that are useful for further economic analysis. For example, the list element sij contains the share probabilities for every individual and is a required input for calculating elasticities.

The gradient contains two important building blocks, as explained in the appendix of Nevo (2001): the derivatives of the market shares with respect to the non-linear parameters, \(\partial s / \partial \theta_2\), and with respect to the mean utilities, \(\partial s / \partial \delta\).

Both are used to compute the Jacobian and are easy to obtain with the package, as the following example demonstrates:

# market 2:
derivatives1 <- dstdtheta_wrap(  blp_data=nevo_data,
                                par_theta2 = theta_guesses,
                                 market = "market_2")
#> Mean utility (delta) is used as provided in the BLP_data() function.
derivatives2 <- dstddelta_wrap(  blp_data=nevo_data,
                                par_theta2 = theta_guesses,
                                 market = "market_2")
#> Mean utility (delta) is used as provided in the BLP_data() function.

# Jacobian block for market 2: -(ds/ddelta)^(-1) %*% ds/dtheta2
jac_mkt2 <- -solve(derivatives2) %*% derivatives1

jac_mkt2[1:5,1:4]
#>                      unobs_sd*(Intercept) unobs_sd*price unobs_sd*sugar
#> meanUtility_cereal_1           -0.3935523    0.010936676      -1.430109
#> meanUtility_cereal_2           -0.5385401    0.002959190      -8.608901
#> meanUtility_cereal_3           -0.4567829    0.003935221      -2.612346
#> meanUtility_cereal_4           -0.7766701   -0.022430917      -1.567758
#> meanUtility_cereal_5           -0.7821670   -0.085051524      -2.722660
#>                      unobs_sd*mushy
#> meanUtility_cereal_1    -0.47865751
#> meanUtility_cereal_2    -0.20678073
#> meanUtility_cereal_3    -0.43499042
#> meanUtility_cereal_4    -0.04046717
#> meanUtility_cereal_5    -0.04388127

# all markets
jacobian_nevo <- getJacobian_wrap(blp_data=nevo_data,
                                 par_theta2 = theta_guesses,
                                 printLevel = 2)
#> Mean utility (delta) is used as provided in the BLP_data() function.

jacobian_nevo[25:29,1:4] # compare to jac_mkt2
#>                   unobs_sd*(Intercept) unobs_sd*price unobs_sd*sugar
#> cereal_1_market_2           -0.3935523    0.010936676      -1.430109
#> cereal_2_market_2           -0.5385401    0.002959190      -8.608901
#> cereal_3_market_2           -0.4567829    0.003935221      -2.612346
#> cereal_4_market_2           -0.7766701   -0.022430917      -1.567758
#> cereal_5_market_2           -0.7821670   -0.085051524      -2.722660
#>                   unobs_sd*mushy
#> cereal_1_market_2    -0.47865751
#> cereal_2_market_2    -0.20678073
#> cereal_3_market_2    -0.43499042
#> cereal_4_market_2    -0.04046717
#> cereal_5_market_2    -0.04388127
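As a quick check that the per-market building blocks reproduce the corresponding rows of the full Jacobian (the row naming differs, so the comparison strips the dimnames):

all.equal(unname(jac_mkt2[1:5, 1:4]), unname(jacobian_nevo[25:29, 1:4]))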

References

Berry, Steven, James Levinsohn, and Ariel Pakes. 1995. “Automobile Prices in Market Equilibrium.” Econometrica.

Brunner, Daniel, Florian Heiss, Andre Romahn, and Constantin Weiser. 2017. “Reliable Estimation of Random Coefficient Logit Demand Models.” DICE Discussion Paper No 267.

Nevo, Aviv. 2001. “A Practitioner’s Guide to Estimation of Random-Coefficients Logit Models of Demand.” Journal of Economics & Management Strategy.