Original poster: mgj0423

Asking for help: why is my R-squared negative? Thanks for any pointers!

11
cancenry posted on 2007-4-15 19:25:00
I agree with post #10: this can happen when the model has no intercept term. It is covered in Gujarati's textbook.
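A quick Stata sketch of that no-intercept case, using placeholder variables y and x. Stata itself reports an uncentered R2 for noconstant regressions, precisely because the centered version computed below can turn negative:

. regress y x, noconstant
. quietly summarize y
. display "centered R2 = " 1 - e(rss)/(r(Var)*(r(N)-1))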

12
蓝色 posted on 2009-9-27 16:20:32

For two-stage least-squares (2SLS/IV/ivreg) estimates, why is the R-squared statistic not printed in some cases?

For two-stage least squares (2SLS/IV/ivreg) estimates, why is the model sum of squares sometimes negative?

For three-stage least squares (3SLS/reg3) estimates, why are the R-squared and model sum of squares sometimes negative?
Title:   Negative and missing R-squared for 2SLS/IV
Authors: William Sribney, Vince Wiggins, and David Drukker, StataCorp
Date:    April 1999; minor revisions May 2005

--------------------------------------------------------------------------------

Background
Two-stage least squares (2SLS) estimates, or instrumental variables (IV) estimates, are obtained in Stata using the ivreg command.

ivreg sometimes reports no R2 and reports a negative value for the model sum of squares.

Three-stage least squares (3SLS) estimates are obtained using reg3. reg3 sometimes reports a negative R2 and model sum of squares. The discussion below focuses on 2SLS/IV; the issues for 3SLS are the same.

The short answer
Missing R2s, negative R2s, and negative model sum of squares are all the same issue.

Stata’s ivreg command suppresses the printing of an R2 on 2SLS/IV if the R2 is negative, which is to say, if the model sum of squares is negative.

Whether a negative R2 should be reported or simply suppressed is a matter of taste. At any rate, the R2 really has no statistical meaning in the context of 2SLS/IV.

If it makes you feel better, you can compute the R2 yourself from the regression output (see the Example section of the FAQ).
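For instance, after any ivreg fit, a one-line sketch using the saved results (this assumes the sums of squares shown in the header are available as e(mss) and e(rss); note that TSS = MSS + RSS):

. display "R-squared = " e(mss)/(e(mss) + e(rss))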

For two-stage least squares, some of the regressors enter the model as instruments when the parameters are estimated. However, since our goal is to estimate the structural model, the actual values, not the instruments for the endogenous right-hand-side variables, are used to determine the model sum of squares (MSS). The model’s residuals are computed over a set of regressors different from those used to fit the model. This means a constant-only model of the dependent variable is not nested within the two-stage least squares model, even though the two-stage model estimates an intercept, and the residual sum of squares (RSS) is no longer constrained to be smaller than the total sum of squares (TSS). When RSS exceeds TSS, the MSS and the R2 will be negative.

The long answer—How can an R2 be negative?
The formula for R-squared is


R2 = MSS/TSS

where

MSS = model sum of squares = TSS − RSS,
TSS = total sum of squares = sum of (y − ybar)^2, and
RSS = residual (error) sum of squares = sum of (y − Xb)^2

On your output MSS is negative, so R2 would be negative.

MSS is negative because RSS is greater than TSS. RSS is greater than TSS because ybar is a better predictor of y (in the sum-of-squares sense) than Xb!

How can Xb be worse than ybar, especially when the model includes the constant term? At first glance, this seems impossible. But it is possible with the 2SLS/IV model.

Here are the background essentials:

Let Z be the matrix of instruments (say, z1, z2, z3, z4).

Let X be the matrix of regressors (say, y2, y3, z3, z4, where y2 and y3 are endogenous and z3 and z4 are exogenous).

Let y be the endogenous variable of interest. That is, we want to estimate b, where

y = Xb + error

Let P = Z(Z'Z)^(−1)Z' be the projection matrix into the space spanned by Z.

2SLS/IV gives point estimates

b = ((PX)'(PX))^(−1)(PX)'y

The coefficients are simply those from an ordinary regression, but with the predictors in the columns of PX (the projection of X into Z space).
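In Mata, that computation is only a few lines. A minimal sketch, assuming y, X, and Z are Mata matrices already in memory (hypothetical names; X and Z each include a constant column):

mata:
    P  = Z*invsym(Z'*Z)*Z'          // projection onto the column space of Z
    PX = P*X                        // endogenous columns replaced by their projections
    b  = invsym(PX'*PX)*(PX'*y)     // OLS of y on PX = the 2SLS point estimates
mata end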

Let’s assume that you have two endogenous right-hand-side variables (y1 and y2), two exogenous variables (x1 and x2), and two instruments not in the structural equation (z1 and z2). This makes your structural equation


y = (Y)B1 + (X)B2 + e

or

y = b1*y1 + b2*y2 + b3*x1 + b4*x2 + e

(where B1 and B2 are components of the vector of coefficients—b). If you run the following,

. regress y1 x1 x2 z1 z2
. predict yhat1
. regress y2 x1 x2 z1 z2
. predict yhat2
. regress y yhat1 yhat2 x1 x2

you will get exactly the coefficients of the 2SLS/IV model:


. ivreg y (y1 y2 = z1 z2) x1 x2

(but you will get different standard errors).
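To make the equivalence concrete, here is a check that previews the auto.dta example used later in this FAQ (mpghat is our name for the first-stage fitted values):

. sysuse auto, clear
. regress mpg headroom foreign
. predict double mpghat
. regress price mpghat headroom
. ivreg price (mpg = foreign) headroom

The last two commands report identical coefficients; only the standard errors (and the reported sums of squares) differ.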

Now if we computed residuals after

. regress y yhat1 yhat2 x1 x2

the residuals would be

r = y − (PX)b

The sum of squares of these residuals can never exceed the total sum of squares.

But these are not the right residuals for 2SLS/IV. Since we are fitting a structural model, we are interested in the residuals using the actual values of the endogenous variables.

The correct two-stage least squares residuals are

e = y − Xb

Here there is no guarantee that the sum of these squared residuals is less than the total sum of squares. These residuals do not come from a model that nests a constant-only model of y.
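In Stata, predict after ivreg computes exactly these structural residuals; the projected-regressor residuals must be built by hand. A sketch continuing the auto.dta commands above (e_struct and e_proj are our names, not from the FAQ):

. ivreg price (mpg = foreign) headroom
. predict double e_struct, residuals
. gen double e_proj = price - (_b[mpg]*mpghat + _b[headroom]*headroom + _b[_cons])

The sum of e_proj^2 is the second-stage OLS residual sum of squares and cannot exceed the TSS; the sum of e_struct^2 can, as the example below shows.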

An example
Let’s take a simple, and admittedly silly, example from our favorite dataset—auto.dta.

. sysuse auto, clear
(1978 Automobile Data)
. ivreg price (mpg = foreign) headroom

Instrumental variables (2SLS) regression

       Source |       SS       df       MS              Number of obs =      74
-------------+------------------------------           F(  2,    71) =    0.55
        Model |  -202135715     2  -101067857           Prob > F      =  0.5777
     Residual |   837201111    71  11791564.9           R-squared     =       .
-------------+------------------------------           Adj R-squared =       .
        Total |   635065396    73  8699525.97           Root MSE      =  3433.9

------------------------------------------------------------------------------
        price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          mpg |   154.4941   244.3001     0.63   0.529    -332.6264    641.6146
     headroom |   836.4137   838.8321     1.00   0.322    -836.1699    2508.997
        _cons |     371.36   7420.742     0.05   0.960    -14425.18     15167.9
------------------------------------------------------------------------------
Instrumented:  mpg
Instruments:   headroom foreign
------------------------------------------------------------------------------

There is your negative model sum of squares (−202135715). The model sum of squares is just the improvement the full model provides over the sum of squares about the mean. But the sum of squared residuals from the model predictions is 837201111, whereas the sum of squared residuals about the mean of price is 635065396. Computing the model sum of squares as

    . disp "MSS:  "  %15.0f 635065396 -  837201111
    MSS:  -202135715

we can see that our model actually performs worse than the mean of price. Why didn’t our constant keep this from happening? The coefficients are estimated using an instrument for mpg. Thus the constant need not provide an intercept that minimizes the sum of squared residuals when the actual values of the endogenous variables are used.
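By contrast, plain OLS on the same structural regressors (ignoring the endogeneity problem, of course) chooses its intercept and slopes to minimize the sum of squared residuals in the actual values of mpg, so its RSS can never exceed the TSS:

. regress price mpg headroom

2SLS instead minimizes the squared residuals in the projected regressors, which is what leaves the structural RSS unbounded by the TSS.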

Just to be sure, let's perform the sum-of-squares computations by hand.

To get the sum of squared residuals for our model...

. predict double errs, residuals
     
. gen double errs2 = errs * errs
     
. summarize errs2
     
     Variable |     Obs        Mean   Std. Dev.       Min        Max
-------------+-----------------------------------------------------
        errs2 |      74    1.13e+07   2.01e+07    3017.34   9.57e+07

. display "Ess:  " %15.0f e(rss)
Ess:        837201111

which agrees exactly with the output from ivreg.

To get the total sum of squared residuals about the mean of price,

. summarize price
        
     Variable |     Obs        Mean   Std. Dev.       Min        Max
-------------+-----------------------------------------------------
        price |      74    6165.257   2949.496       3291      15906
     
. gen double pbarErr2 = (price - r(mean))^2

. summarize pbarErr2

     Variable |     Obs        Mean   Std. Dev.       Min        Max
-------------+-----------------------------------------------------
     pbarErr2 |      74     8581965   1.69e+07    .065924   9.49e+07

. display "TSS:  " %15.0f r(sum)
TSS:        635065396

which also agrees exactly with the output from ivreg.

So, our “hand” computations also give a model sum of squares of −202135715.
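For completeness, the R2 that ivreg suppressed follows directly from these sums:

. display "R2:  " %9.4f (635065396 - 837201111)/635065396
R2:    -0.3183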

Is a negative R2 a problem?
What does it mean when RSS is greater than TSS? Does this mean our parameter estimates are no good? Not really. You can easily develop simulations where the parameter estimates from two-stage are quite good while the MSS is negative. Remember why we fit two-stage models. We are interested in the parameters of the structural equation—the elasticity of demand, the marginal propensity to consume, etc. If our two-stage model produces estimates of these parameters with acceptable standard errors, we should be happy—regardless of MSS or R2. If we were interested strictly in projections of the dependent variable, then we should probably consider the reduced form of the model.

Another way of stating this point is that there are models in which the distribution of the 2SLS parameter estimates is well approximated by its theoretical distribution but the R2 computed from some samples is negative. There are several ways of illustrating this point. Perhaps the most accessible is via simulation.

We simulate data from the model

(1) y = 1 − .1*x + e1 + e2

(2) x = w + z + c1 + .5*e1

(3) z = 1.5*c1 + e3

where e1, e2, e3, w, and c1 are all independent normal random variables. The c1 term in equations (2) and (3) provides the correlation between x and z. The e1 term in equations (1) and (2) is the source of the correlation between x and the error term (e1 + e2) for y. The coefficient of −.1 is the parameter that we are trying to estimate. We are going to estimate this parameter via 2SLS using ivreg with y as the dependent variable, x as the endogenous variable, and z as the instrument for x. For each simulated sample, we construct y, x, and z from independent draws of the standard normal variables e1, e2, e3, w, and c1 via equations (1)–(3). Then we use

. ivreg y (x = z)

to estimate the coefficient −.1. For each simulated sample, we record the following statistics:

b1        the estimate of the coefficient (−.1)
p         the p-value for the test of the null hypothesis that b1 = −.1
reject    1 if p < .05 and 0 otherwise
r2        the computed R2 (missing if mss < 0)
mss       the value of the model sum of squares
rho_x1e   the correlation between x1 and e = e1 + e2
rho_x1z1  the correlation between x1 and z1
fsf       the first-stage F statistic
p_fsf     the p-value of the first-stage F statistic

The Stata code for drawing 2,000 simulations of this model, estimating the coefficient −.1, computing the statistics of interest, and finally, summarizing the results, is saved in the file negr2.do. Each simulated sample contains 1,000 observations, so the results should not be attributed to a small sample size.
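negr2.do itself is not reproduced in this FAQ, but a minimal sketch of such a simulation might look like the following (onedraw is our hypothetical program name; the actual file also records p, reject, rho_x1e, rho_x1z1, fsf, and p_fsf):

program define onedraw, rclass
    drop _all
    set obs 1000                        // 1,000 observations per sample
    gen double e1 = rnormal()
    gen double e2 = rnormal()
    gen double e3 = rnormal()
    gen double w  = rnormal()
    gen double c1 = rnormal()
    gen double z  = 1.5*c1 + e3         // equation (3)
    gen double x  = w + z + c1 + .5*e1  // equation (2)
    gen double y  = 1 - .1*x + e1 + e2  // equation (1)
    ivreg y (x = z)
    return scalar b1  = _b[x]
    return scalar mss = e(mss)
    return scalar r2  = e(r2)           // missing when e(mss) < 0
end

simulate b1=r(b1) mss=r(mss) r2=r(r2), reps(2000) nodots: onedraw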

Here is what we obtained when we used summarize to look at the results.

. sum
        
     Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
           b1 |      2000   -.1025507     .054484   -.361239   .0578525
            p |      2000    .4951122    .2863575   .0000638   .9994608
       reject |      2000         .05    .2179995          0          1
           r2 |        48    .0053981    .0050849   .0001775   .0205909
          mss |      2000   -82.03539    49.51527  -317.1851   37.13932
-------------+--------------------------------------------------------
      rho_x1e |      2000    .2344962    .0302909   .1359878    .325926
     rho_x1z1 |      2000    .5544141    .0222491    .483774   .6284751
          fsf |      2000    445.7681    51.80968   304.9355   651.5349
        p_fsf |      2000    1.63e-34    2.24e-33          0   7.55e-32

The results for rho_x1e, rho_x1z1, fsf, and p_fsf indicate that the correlations between the endogenous variable and the error term and between the endogenous variable and its instrument are reasonable and that there is no weak-instrument problem. The results for b1, p, and reject indicate that the mean estimate of the coefficient on x is very close to its true value of −.1 and that there is no size distortion in the test that the coefficient on x equals −.1. In short, the distribution of the estimates, b1, is very well approximated by its theoretical asymptotic distribution. Together, these results imply that the 2SLS estimator is performing according to the theory in these simulations.

There are only 48 observations on r2 because there are 1,952 observations in which mss < 0.

. count if mss < 0
      1952

Thus the results illustrate that there is at least one model for which the distribution of the 2SLS parameter estimates is very well approximated by its asymptotic distribution but the R2 is negative in most of the individual samples. To obtain more models that produce the same qualitative results, simply change the coefficient −.1 by a small amount. As one would expect, increasing the magnitude of the coefficient −.1 reduces the fraction of simulated samples that produce a negative R2.

13
pingguzh posted on 2014-3-7 16:58:18
Moderator 蓝色, could you give a brief explanation in Chinese?

14
sunny5555555 posted on 2015-9-24 10:36:37
WUNENG posted on 2007-3-31 21:51: If it is a GARCH model, that is perfectly normal; the R2 itself can be negative.
Why can the R2 of a GARCH model be negative? How is that explained? Thanks.
