Hi OP! I'm not familiar with this model; is the corresponding command -asclogit-? From a quick look at the Stata help, it is also fit by maximum likelihood, so you could try changing the estimation method via the -technique()- option. The default is nr, and you can switch to dfp or bfgs, or any combination of nr, dfp, and bfgs (the documentation says bhhh is not supported).
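For example, a minimal sketch (choice, cost, time, id, and mode are placeholder names, not from your data):

    asclogit choice cost time, case(id) alternatives(mode) technique(dfp nr)

With technique(dfp nr), ml alternates between DFP and Newton-Raphson steps during the search.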
You could also try Roodman's -cmp- command, which is more flexible (though I'm not sure whether it can handle your model). Below are Roodman's suggestions for when cmp fails to converge or converges slowly; a few illustrative command lines follow the quote:
If you are having trouble achieving (or waiting for) convergence with cmp, these techniques
might help:
1. Changing the search techniques using ml's technique() option, or perhaps the search
parameters, through its maximization options. cmp accepts all these and passes them on
to ml. The default Newton-Raphson search method usually works very well once ml has
found a concave region. The DFP algorithm (tech(dfp)) often works better before then,
and the two can be mixed, as with tech(dfp nr). See the details of the technique()
option at ml.
2. If the estimation problem requires the GHK algorithm (see above), change the number of
draws per observation in the simulation sequence using the ghkdraws() option. By
default, cmp uses twice the square root of the number of observations for which the GHK
algorithm is needed, i.e., the number of observations that are censored in at least
three equations. Raising simulation accuracy by increasing the number of draws is
sometimes necessary for convergence and can even speed it by improving search precision.
On the other hand, especially when the number of observations is high, convergence can
be achieved, at some loss in precision, with remarkably few draws per observation--as
few as 5 when the sample size is 10,000 (Cappellari and Jenkins 2003). And taking more
draws can also greatly extend execution time.
3. If getting many "(not concave)" messages, try the difficult option, which instructs ml to
use a different search algorithm in non-concave regions.
4. If the search appears to be converging in likelihood--if the log likelihood is hardly
changing in each iteration--and yet fails to converge, try adding a nrtolerance(#) or
nonrtolerance option to the command line after the comma. These are ml options that
control when convergence is declared. (See ml_opts, below.) By default, ml declares
convergence when the log likelihood is changing very little with successive iterations
(within tolerances adjustable with the tolerance(#) and ltolerance(#) options) and when
the calculated gradient vector is close enough to zero. In some difficult problems,
such as ones with nearly collinear regressors, the imprecision of floating point numbers
prevents ml from quite satisfying the second criterion. It can be loosened by using the
nrtolerance(#) option to set the scaled gradient tolerance to a value larger than its default
of 1e-5, or eliminated altogether with nonrtolerance. Because of the risks of
collinearity, cmp warns when the condition number of an equation's regressor matrix
exceeds 20 (Greene 2000, p. 40).
5. Try cmp's interactive mode, via the interactive option. This allows the user to interrupt
maximization by hitting Ctrl-Break or its equivalent, investigate and adjust the current
solution, and then restart maximization by typing ml max. Techniques for exploring and
changing the current solution include displaying the current coefficient and gradient
vectors (with mat list $ML_b or mat list $ML_g) and running ml plot. See help ml, [R]
ml, and Gould, Pitblado, and Sribney (2006) for details. cmp is slower in interactive
mode.
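To make suggestions 1-3 concrete, here is a sketch of a cmp call; the trivariate probit model and the option values are invented, so substitute your own:

    * run once per session to define the $cmp_* indicator globals
    cmp setup
    * mix DFP and NR search, raise the GHK draws, and pass -difficult- to ml
    cmp (y1 = x1 x2) (y2 = x1 x3) (y3 = x2 x3), ///
        indicators($cmp_probit $cmp_probit $cmp_probit) ///
        technique(dfp nr) ghkdraws(200) difficult

A three-equation probit is used here because, per point 2, the GHK simulator (and hence ghkdraws()) only comes into play when observations are censored in at least three equations.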
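For suggestion 4, the tolerance options go after the comma like any other; again the model itself is a stand-in:

    cmp (y1 = x1 x2) (y2 = x1 x3), indicators($cmp_cont $cmp_probit) nrtolerance(1e-4)
    * or drop the scaled-gradient criterion entirely:
    cmp (y1 = x1 x2) (y2 = x1 x3), indicators($cmp_cont $cmp_probit) nonrtolerance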
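And for suggestion 5, a sketch of an interactive session; the equation:parameter pair given to ml plot is just an example:

    cmp (y1 = x1 x2) (y2 = x1 x3), indicators($cmp_cont $cmp_probit) interactive
    * press Ctrl-Break when you want to inspect the search, then:
    mat list $ML_b
    mat list $ML_g
    * profile the log likelihood in one parameter, then resume:
    ml plot y1:x1
    ml max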