Thread starter: 丁善本

[Q&A] Could someone help me figure out why EViews gives this result for a binary model?


1 (OP)
丁善本 posted on 2009-8-3 16:38:00
Bounty: 500 forum coins
If you can help me solve this, I'll pay out the bounty as offered. Thanks, everyone.

[Attachment: 123.jpg (33.87 KB)]



2
丁善本 posted on 2009-8-3 16:38:51
Why can't I see the image after posting?

3
kingcain0530 posted on 2009-8-3 16:40:23
Can't see the image.

4
丁善本 posted on 2009-8-3 16:43:46
I've put the image in the attachment for now; I don't know how to embed images, sorry.

5
qingnv posted on 2009-8-3 23:46:05
Can't see the image!

6
kouwenhong posted on 2009-8-4 17:32:19
I just noticed this is a two-year-old thread. Was it ever solved?

7
zhileo (verified business account) posted on 2009-8-16 10:29:41
Second Derivative Methods

For binary, ordered, censored, and count models, EViews can estimate the model using Newton-Raphson or quadratic hill-climbing.

Newton-Raphson

Candidate values for the parameters $\theta^{(i+1)}$ may be obtained using the method of Newton-Raphson by linearizing the first-order conditions $\partial l/\partial\theta$ at the current parameter values, $\theta^{(i)}$:

$$\theta^{(i+1)} = \theta^{(i)} - H^{-1}\!\left(\theta^{(i)}\right) g\!\left(\theta^{(i)}\right) \qquad (32.2)$$

where $g(\theta^{(i)})$ is the gradient vector $\partial l/\partial\theta$ and $H(\theta^{(i)})$ is the Hessian matrix $\partial^2 l/\partial\theta\,\partial\theta'$, both evaluated at $\theta^{(i)}$.

If the function is quadratic, Newton-Raphson will find the maximum in a single iteration. If the function is not quadratic, the success of the algorithm will depend on how well a local quadratic approximation captures the shape of the function.
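As a rough illustration of the update in (32.2) — a minimal Python sketch of my own, not EViews code — the loop below applies the Newton-Raphson step to a generic log-likelihood; `grad` and `hess` are placeholders for user-supplied derivatives, and the toy example is a one-parameter normal mean.

```python
import numpy as np

def newton_raphson(grad, hess, theta0, tol=1e-8, max_iter=100):
    """Maximize a log-likelihood via the update in (32.2):
    theta_{i+1} = theta_i - H(theta_i)^{-1} g(theta_i).
    grad and hess are user-supplied callables (placeholders here)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad(theta)                  # gradient vector dl/dtheta
        H = hess(theta)                  # Hessian matrix d2l/dtheta dtheta'
        step = np.linalg.solve(H, g)     # H^{-1} g without forming the inverse
        theta, theta_old = theta - step, theta
        if np.max(np.abs(theta - theta_old)) < tol:
            break
    return theta

# Toy example: log-likelihood of a normal mean with unit variance.
# The objective is exactly quadratic, so (as the text says) one iteration suffices.
y = np.array([0.2, -0.5, 1.3, 0.7])
grad = lambda th: np.atleast_1d(np.sum(y - th))
hess = lambda th: np.array([[-float(len(y))]])
print(newton_raphson(grad, hess, theta0=[0.0]))  # equals y.mean()
```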

Quadratic hill-climbing (Goldfeld-Quandt)

This method, which is a straightforward variation on Newton-Raphson, is sometimes attributed to Goldfeld and Quandt. Quadratic hill-climbing modifies the Newton-Raphson algorithm by adding a correction matrix (or ridge factor) to the Hessian. The quadratic hill-climbing updating algorithm is given by:

$$\theta^{(i+1)} = \theta^{(i)} - \tilde{H}^{-1}\!\left(\theta^{(i)}\right) g\!\left(\theta^{(i)}\right), \qquad -\tilde{H}\!\left(\theta^{(i)}\right) = -H\!\left(\theta^{(i)}\right) + \alpha I \qquad (32.3)$$

where $I$ is the identity matrix and $\alpha$ is a positive number that is chosen by the algorithm.

The effect of this modification is to push the parameter estimates in the direction of the gradient vector. The idea is that when we are far from the maximum, the local quadratic approximation to the function may be a poor guide to its overall shape, so we may be better off simply following the gradient. The correction may provide better performance at locations far from the optimum, and allows for computation of the direction vector in cases where the Hessian is near singular.
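Continuing the hypothetical sketch above (again not EViews internals), the only change in quadratic hill-climbing is the matrix that gets inverted: a ridge term $\alpha I$ is added to $-H$ before solving, so the step stays well defined when the Hessian is near singular and leans toward the raw gradient when $\alpha$ is large.

```python
import numpy as np

def hill_climb_step(theta, g, H, alpha):
    """One quadratic hill-climbing update, as in (32.3):
    solve (-H + alpha*I) d = g and take theta + d.
    alpha = 0 reproduces the Newton-Raphson step from (32.2);
    a large alpha makes d roughly proportional to the gradient g."""
    ridge = -H + alpha * np.eye(len(theta))   # -H~ = -H + alpha*I
    d = np.linalg.solve(ridge, g)
    return theta + d
```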

For models which may be estimated using second derivative methods, EViews uses quadratic hill-climbing as its default method. You may elect to use traditional Newton-Raphson, or the first derivative methods described below, by selecting the desired algorithm in the Options menu.

Note that asymptotic standard errors are always computed from the unmodified Hessian once convergence is achieved.
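A one-line sketch of that last point (hypothetical Python, not EViews code): the asymptotic standard errors come from the inverse of the negative, unmodified Hessian at the optimum, so if the Hessian cannot be evaluated or is singular there, no standard errors can be reported.

```python
import numpy as np

def asymptotic_se(H):
    """Asymptotic ML standard errors from the unmodified Hessian at the optimum:
    square roots of the diagonal of (-H)^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(-H)))
```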
This part might be of some use.

8
yinjb posted on 2009-8-16 14:52:02
I searched around and found someone with the same problem as yours:
Could someone please help me with this? Thanks
Q1. This is what I got from IGARCH(1,1). Could you please tell me what I should do with this?

Dependent Variable: R_JPY
Method: ML - ARCH (Marquardt) - Student's t distribution
Sample (adjusted): 1/06/1999 1/05/2009
Included observations: 2511 after adjustments
Failure to improve Likelihood after 1 iteration
Unable to evaluate derivatives at current parameter values
Presample variance: backcast (parameter = 0.7)
GARCH = C(2)*RESID(-1)^2 + (1 - C(2))*GARCH(-1)

Variable Coefficient Std. Error z-Statistic Prob.

C -0.004247 NA NA NA

Variance Equation

RESID(-1)^2 1.225047 NA NA NA
GARCH(-1) -0.225047 NA NA NA

T-DIST. DOF 20.00000 NA NA NA

Mean dependent var -0.004247 S.D. dependent var 1.107039
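For reference, the variance equation above is an IGARCH(1,1): a GARCH(1,1) with no constant and the two coefficients constrained to sum to one, leaving C(2) as the single free parameter. A short numpy sketch of that recursion, using simulated residuals rather than the R_JPY data, shows why a C(2) outside (0, 1) — such as the 1.225 reported above — can drive the conditional variance negative, at which point the log-likelihood cannot be evaluated at those parameter values.

```python
import numpy as np

def igarch_variance(eps, c, sigma2_0=1.0):
    """IGARCH(1,1) recursion matching the EViews variance equation:
    sigma2_t = c * eps_{t-1}^2 + (1 - c) * sigma2_{t-1}."""
    sigma2 = np.empty_like(eps)
    sigma2[0] = sigma2_0
    for t in range(1, len(eps)):
        sigma2[t] = c * eps[t - 1] ** 2 + (1.0 - c) * sigma2[t - 1]
    return sigma2

# Simulated residuals stand in for the actual series (an assumption, not the JPY data).
eps = np.random.default_rng(0).standard_normal(1000)
print(igarch_variance(eps, c=0.05).min())   # c in (0, 1): variance stays positive
print(igarch_variance(eps, c=1.225).min())  # c > 1: recursion typically dips negative
```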

9
yinjb posted on 2009-8-16 14:55:30
Solution:
For question 1, it works when I change the error distribution to Normal Distribution.

After switching to the normal distribution, the problem was solved.

It is difficult to tell without looking into the details of your work and data. The specification of the log-likelihood function differs with your choice of error distribution, and the solution can get more complicated from a numerical point of view. IGARCH has an additional constraint, which may be too restrictive for the optimization process of the model. On the other hand, it may simply be because the errors do not have fat tails.
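As a rough cross-check of that advice outside EViews, the sketch below uses the Python arch package (assumed installed) to fit a plain GARCH(1,1) first with normal and then with Student's t errors; the returns are simulated as a stand-in for R_JPY, and the IGARCH unit constraint itself is not imposed.

```python
import numpy as np
from arch import arch_model  # assumes the third-party `arch` package is available

# Simulated daily-return-like series as a stand-in for R_JPY (not the original data).
rng = np.random.default_rng(42)
returns = 0.7 * rng.standard_t(df=6, size=2500)

# Same mean and variance specification under two error distributions.
fit_normal = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1,
                        dist="normal").fit(disp="off")
fit_t = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1,
                   dist="t").fit(disp="off")

# If the Student's t fit is the one that fails to converge, switching dist to
# "normal" is the same change described in the reply above.
print("normal log-likelihood:", fit_normal.loglikelihood)
print("t      log-likelihood:", fit_t.loglikelihood)
print(fit_t.params)  # includes the estimated degrees-of-freedom parameter
```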

10
yinjb posted on 2009-8-16 14:56:21
Hope this is useful to you.
Here is the original thread, if you are interested:
http://forums.eviews.com/viewtopic.php?f=4&t=1125
