Original poster: sillyfeng

[Frontier Topics] How can I use software to estimate the parameters of a maximum-likelihood function I wrote myself?


#1 (OP)
sillyfeng, posted 2005-12-21 13:38:00

Could those of you who know how to do this please point me to some reference material to read? Thank you. I am also willing to purchase materials.
Keywords: maximum likelihood, likelihood function, how to use, reference material, software


#2
yiyo900, posted 2005-12-21 14:15:00

1. In MATLAB, a maximum-likelihood AR(1) regression is done like this:

% use Cochrane-Orcutt estimates as initial values
reso = olsc(y,x);

parm = zeros(nvar+2,1);
parm(1,1) = reso.rho;              % initial rho
parm(2:2+nvar-1,1) = reso.beta;    % initial bhat's
parm(2+nvar,1) = reso.sige;        % initial sigma

oresult = maxlik('ar1_like',parm,[],y,x);

2. ar1_like() is the function used to evaluate the log-likelihood of an OLS model with AR(1) errors (a sketch of such an evaluator follows at the end of this reply).

3. I'm not sure whether this meets your needs; if it does, I can provide reference materials.
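For readers who want to see what the evaluator in item 2 computes, below is a minimal MATLAB sketch of the exact log-likelihood of a regression with AR(1) errors, using the parameter ordering from the snippet above ([rho; beta; sige]). The function name ar1_loglik and its calling convention are made up for this illustration; it is not the toolbox's actual ar1_like.

function llik = ar1_loglik(parm, y, x)
% Exact Gaussian log-likelihood of y = x*beta + u, u_t = rho*u_{t-1} + e_t,
% e_t ~ N(0, sige), assuming |rho| < 1. Illustrative sketch only.
[n, k] = size(x);
rho  = parm(1);                    % AR(1) coefficient
beta = parm(2:k+1);                % regression coefficients
sige = parm(k+2);                  % innovation variance
u    = y - x*beta;                 % regression residuals
e    = u(2:n) - rho*u(1:n-1);      % quasi-differenced innovations, t = 2..n
llik = -(n/2)*log(2*pi*sige) + 0.5*log(1 - rho^2) ...
       - ((1 - rho^2)*u(1)^2 + sum(e.^2)) / (2*sige);
end

An optimizer can then search over parm to maximize this value (or minimize its negative, as discussed in the replies below).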

#3
lyslz, posted 2005-12-21 17:05:00

Use MATLAB's optimization function:

fminsearch('your likelihood function');

#4
ffeng, posted 2005-12-21 21:42:00
I don't think the answer in #3 is quite right: the likelihood has to be maximized, so it should be

fminsearch('-(your likelihood function)');
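To make this concrete, here is a minimal, self-contained MATLAB sketch of the approach: write the log-likelihood, negate it, and pass it to fminsearch. The normal model, the simulated data, and the variable names are assumptions made up for this illustration.

% Minimal sketch: maximize a hand-written log-likelihood by minimizing its
% negative with fminsearch. Model, data, and names are illustrative only.
y = 5 + 2*randn(200,1);                    % simulated sample (hypothetical)

% negative log-likelihood of N(mu, sigma^2), with theta = [mu; log(sigma)]
negloglik = @(theta) -sum( -0.5*log(2*pi) - theta(2) ...
                           - 0.5*((y - theta(1))./exp(theta(2))).^2 );

theta0   = [mean(y); log(std(y))];         % starting values
thetahat = fminsearch(negloglik, theta0);  % fminsearch minimizes, so pass the NEGATIVE log-likelihood
mu_hat    = thetahat(1);                   % estimated mean
sigma_hat = exp(thetahat(2));              % back to the natural scale

Parameterizing with log(sigma) keeps the standard deviation positive during the search; the GAUSS code in #5 below uses the same trick for its parameters.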

#5
zhaosweden, posted 2005-12-22 01:41:00

Below I give an example of GAUSS code estimating a GARCH-t model using "optmum".

It is similar for maxlik. Sometimes I find optmum is more robust than maxlik,

but in general maxlik is easier to use.

Note that some parts of the code may not be relevant for your purpose.

----------------------

cls;
library pgraph,optmum;
#include optmum.ext;
graphset;
data_file2 = "E:\\Timo1\\Stock_index\\SHHChina.xls";
data_range2 = "a302:a1837";
data = SpreadsheetReadM(data_file2,data_range2,1);
data1 = lag1(data);
data = data[2:rows(data),.];
data1 = data1[2:rows(data1),.];
R = ln(data./data1);
y = R - meanc(R);

T_simu_Setting = 1300;    // Number of realizations for each simulation
Burnin_Setting = 300;     // Burn-in
Draw_Setting = 500;       // Number of simulations
Trim_or_Not = 0;          // 1 for trimming, 0 for not trimming
file_name_suffix = "_500_Untrimmed_fixed_nu.xls";
File_Ex_Plug_in = "E:\\Timo1\\Confi_Regi\\GARCH_t\\GARCH_t_Plug_in_KR_ACF" $+ file_name_suffix;
File_Ex_Para_Estimation_Result = "E:\\Timo1\\Confi_Regi\\GARCH_t\\GARCH_t_para_Estimation_Result" $+ file_name_suffix;
File_Ex_para_hat_Isoquant = "E:\\Timo1\\Confi_Regi\\GARCH_t\\GARCH_t_para_hat_Isoquant" $+ file_name_suffix;
File_Ex_nonparametric_KR_ACF = "E:\\Timo1\\Confi_Regi\\GARCH_t\\GARCH_t_Nonparametric_KR_ACF" $+ file_name_suffix;
File_Ex_Esti_Simu = "E:\\Timo1\\Confi_Regi\\GARCH_t\\GARCH_t_Esti_Simu" $+ file_name_suffix;
?; "File_Ex_Plug_in " File_Ex_Plug_in;
?; "File_Ex_para_hat_Isoquant " File_Ex_para_hat_Isoquant;
// I found that I need to run this set of code again and again, so I moved the commands above to the
// beginning of the program. This helps prevent me from forgetting to change some of them when
// re-running the program with different settings. 2005-11-06

if Trim_or_Not == 1;
    " This part of the code is useful to keep commented out in normal use: ";
    " we then use the original log returns, possibly containing some outliers (extreme observations). ";
    " When the simulation cloud cannot capture the empirical measures (because of the outliers), ";
    " we uncomment this part and analyze only the dataset with the outliers deleted. ";
    " It is very likely that with the trimmed dataset the simulation cloud covers ";
    " the empirical measure of the ACF and kurtosis very well. ";
    LowQ = 0.005;
    UpperQ = 0.995;
    e_q = LowQ|UpperQ;
    y_q = quantile(y,e_q);       " quantile returns a column vector ";
    W = y;
    Q1 = (W .>= Y_q[2,.]);
    Q2 = (W .<= Y_q[1,.]);
    Q = Q1 + Q2;                 " elements below the lower or above the upper quantile are set to 1 ";
    Q = 1 - Q;                   " they become zero ";
    W = W.*Q;                    " such elements become 0's ";
    W = miss(W,0);               " 0's become missing values ";
    y = packr(W);                " all missing values are deleted ";
endif;

LogReturns = y;    // this log return will be used in the simulation part of this code

b10 = 1.636e-5;
b20 = 0.2;
b30 = 0.7;
b40 = 8;
b0 = b10|b20|b30|b40;
Kjia = 4;             /* we have 4 parameters to be estimated */
_opalgr = 2;          /* _opalgr = 1 Steepest descent, default
                      **           2 BFGS
                      **           3 Scaled BFGS
                      **           4 Self-scaling DFP
                      **           5 Newton-Raphson
                      **           6 Polak-Ribiere conjugate gradient */
_opstep = 3;          /* _opstep = 1 steplength = 1
                      **           2 STEPBT, default
                      **           3 golden steplength
                      **           4 Brent */
_opshess = 1;
Flag72 = 1;           /* 1 = return the objective function for minimization */
{best72,l,s,retcode} = optmum(&GARCH_N,b0);
T = rows(y);
Flag72 = 2;           /* if 2, return the vector of individual log-likelihoods; */
                      /* this is needed to compute h(theta0,Y_t), the score, */
                      /* in Hamilton 1994, pp. 389 */
h0 = gradp(&GARCH_N,best72);    /* returns a T*k gradient matrix, */
                                /* where k is the number of parameters */
J_op = zeros(Kjia,Kjia);
for i (1,T,1);
    J_op = J_op + h0[i,.]'*h0[i,.];
endfor;
J_op = J_op/T;        /* Now J_op is just the J_op in eqn. [13.4.9] in Hamilton 1994, pp. 389 */
Flag72 = 3;           /* if 3, return the log-likelihood */
hess0 = hessp(&GARCH_N,best72);
J_2d = -hess0/T;      /* J_2d is the J_2D in Hamilton 1994, pp. 389, [13.4.8] */
cov = inv(J_2d*inv(J_op)*J_2d);
cov = cov/T;
covG = inv(J_2d)/T;   /* the var-cov matrix based only on the information matrix */
Omega = exp(best72[1]);         /* revert to the natural parameter for Omega */
alpha = best72[2]^2;            /* revert to the natural parameter for Alpha */
w3 = best72[3];
beta = exp(w3)/(1+exp(w3));     /* revert to the natural parameter for beta */
nu = exp(best72[4]);

d1 = exp(best72[1]);             /* for the Delta Method */
d2 = 2*best72[2];                /* for the Delta Method */
d3 = exp(w3)/((1+exp(w3))^2);    /* for the Delta Method */
d4 = exp(best72[4]);             /* for the Delta Method */

d72 = d1|d2|d3|d4;                     /* for the Delta Method */
d72 = diagrv(zeros(Kjia,Kjia),d72);
cov = d72*cov*d72;                     /* the Delta Method */
covG = d72*covG*d72;
?;?;?;?;?;?;
est_1 = Omega|alpha|beta|nu;
cov_1 = cov;
cov_1_three = cov[1:3,1:3];
// Later on we will fix nu at some rounded-off number, so when we do the simulation
// we draw random numbers treating nu_hat as a fixed value, not as random.
// est_1 and cov_1 are for the simulation part.
?; " Estimation Results with Robust standard errors."; ?;
"        Starter    estimate    Std.Err.    t";
"Omega" b10~Omega~sqrt(cov[1,1])~Omega/sqrt(cov[1,1]);
"Alpha" b20~alpha~sqrt(cov[2,2])~alpha/sqrt(cov[2,2]);
"beta " b30~beta~sqrt(cov[3,3])~beta/sqrt(cov[3,3]);
"nu   " b40~nu~sqrt(cov[4,4])~nu/sqrt(cov[4,4]); ?;
"var-cov matrix" cov; ?;
"Correlation matrix:" corrvc(cov); ?;
" Estimation Results with standard errors from the information matrix "; ?;
"        Starter    estimate    Std. Error    t";
"Omega" b10~Omega~sqrt(covG[1,1])~Omega/sqrt(covG[1,1]);
"Alpha" b20~alpha~sqrt(covG[2,2])~alpha/sqrt(covG[2,2]);
"beta " b30~beta~sqrt(covG[3,3])~beta/sqrt(covG[3,3]);
"nu   " b40~nu~sqrt(covG[4,4])~nu/sqrt(covG[4,4]); ?;
"var-cov matrix " covG; ?;
"alpha + Beta = " alpha+beta;
//-------------------------------------------

proc Garch_N(b);
    local T,lhd,home,like,i,h,e,alp0,alpha,beta,nu;
    alp0 = exp(b[1]);                  /* alp0 > 0 */
    alpha = b[2]*b[2];                 /* alpha >= 0 */
    beta = exp(b[3])/(1+exp(b[3]));    /* beta is restricted to the interval (0,1) */
    nu = exp(b[4]);                    /* degrees-of-freedom parameter > 0 */

    T = rows(y);
    h = zeros(T,1);
    lhd = zeros(T,1);
    e = y;               /* error term in the mean eqn */
    h[1,1] = e'*e/T;

    /* The initial value of the conditional variance is set at the sample */
    /* variance of the error term in the mean equation.                   */
    /* h[1,1] = alp0/(1-beta);                                            */
    /* As an alternative, the initial value of the conditional variance   */
    /* can be set at the unconditional variance.                          */
    i = 2;
    do while i <= T;
        h[i,1] = alp0 + alpha*e[i-1,1]^2 + beta*h[i-1,1];
        lhd[i,1] = ln(gamma((nu+1)/2)) - ln(gamma(nu/2)) - 0.5*ln(pi*nu)
                   - 0.5*ln(h[i,1]) - 0.5*(nu+1)*ln(1 + e[i,1]^2/(nu*h[i,1]));
        i = i+1;
    endo;
    /* lhd is the vector of sample log-likelihoods */
    like = sumc(lhd);    /* sumc gives the sample log-likelihood, a scalar */
    if Flag72 == 1;
        HOME = -like;    /* for minimization with optmum */
    elseif Flag72 == 2;
        HOME = lhd;      /* for the score function computation */
    elseif Flag72 == 3;
        HOME = like;     /* return the sample log-likelihood, a scalar */
    endif;
    retp(home);
endp;
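For readers following the covariance algebra above: the code's comments point to Hamilton (1994, ch. 13), and the computations correspond to the QMLE "sandwich" covariance (combining the Hessian-based estimate J_2d and the outer-product-of-gradients estimate J_op), followed by the delta method that maps standard errors from the transformed parameters w back to the natural parameters. As a summary, not part of the original post:

\[
\widehat{\mathrm{Var}}(\hat{w}) = \frac{1}{T}\,\hat{J}_{2D}^{-1}\,\hat{J}_{OP}\,\hat{J}_{2D}^{-1},
\qquad
\widehat{\mathrm{Var}}(\hat{\theta}) = D\,\widehat{\mathrm{Var}}(\hat{w})\,D,
\qquad
D = \mathrm{diag}\!\left(\frac{\partial\theta_i}{\partial w_i}\right),
\]
\[
\theta = (\omega,\alpha,\beta,\nu),\quad
\omega = e^{w_1},\;\; \alpha = w_2^2,\;\; \beta = \frac{e^{w_3}}{1+e^{w_3}},\;\; \nu = e^{w_4}
\;\Rightarrow\;
D = \mathrm{diag}\!\left(e^{w_1},\; 2w_2,\; \frac{e^{w_3}}{(1+e^{w_3})^2},\; e^{w_4}\right),
\]

which are exactly the quantities cov, covG (the information-matrix-only version, \(\hat{J}_{2D}^{-1}/T\)), and d1, ..., d4 computed in the code.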

[This post was last edited by the author on 2005-12-22 at 01:44]

Rated by 1 member: 胖胖小龟宝 gave Experience +10 and Forum coins +10, reason: helpful to other members.

#6
sillyfeng, posted 2005-12-22 09:29:00

Thank you all for your enthusiastic help, but is there anyone here who uses Stata? I can't use MATLAB or GAUSS, only SAS and Stata. Also, the likelihood function I want to use requires simulation, and I don't know whether that can be programmed in Stata. Thanks again for all the warm help! (Heartfelt gratitude from a newbie!)

#7
zhaosweden, posted 2005-12-24 22:49:00

I think Stata should be capable enough to implement your idea (though I do not know how).

From my experience, the most important thing is that you understand the theory first.

Of course, you need some experience with your favourite software package. The choice of software is not crucial.

My supervisor has worked with GAUSS his whole life. The idea is to choose one and stick to it.

#8
arlionn (verified professional), posted 2008-11-30 09:35:00
Quoting sillyfeng's post of 2005-12-21 13:38:

Could those of you who know how to do this please point me to some reference material to read? Thank you. I am also willing to purchase materials.

Stata can certainly do this; the advanced Stata video course you have already purchased includes a lecture devoted to exactly this topic.

If you still have difficulties: arlionn@163.com

#9
kiwis, posted 2011-1-30 02:26:24
The poster in #5 is superhuman = =

#10
年华似水007, posted 2011-1-30 08:23:19
I recommend Chapter 8 of the second edition of Gao Tiemei's book 《计量经济建模与EVIEWS应用》 (Econometric Modeling and EViews Applications).
