OP: hanszhu

[Download] Ebook: Applied Bayesian Modelling


A Comparison of TSP 4.5 & EViews 3.1

by Clint Cummins, Econometric Programmer for TSP International, rev. 11/10/99

This is intended to be a fairly detailed comparison of the different features of TSP 4.5 and EViews 3.1. It is based on my knowledge of TSP and the EViews manual and web page. I hope it will be a fair comparison, but given my affiliation and greater familiarity with TSP, there will probably be a few errors. Both programs cover standard econometrics fairly well, as do other packages such as SHAZAM, RATS, LIMDEP, SAS, Gauss, etc. To save space, I will try to limit discussion to the differences (i.e. I won't spend much time on standard features like OLS, 2SLS, etc.).

1. INTERFACE

TSP: Command driven, in batch or interactive mode. Commands are verbs with options and arguments, in the form command(options) arguments; . Most options are 6-8 characters and can be abbreviated. Variable names can be up to 64 characters long.

EViews: Command driven, in both modes. However, all commands can be constructed by following menus and dialog boxes. Command syntax can be: command(options) arguments . No ';' to end a command. The end of a command is (apparently) a hard carriage return (i.e. possible long lines). Most options are single characters. Commands can also be given in "object" syntax: object.command(options) other_arguments .

2. PLATFORMS SUPPORTED

TSP:

  • TSP/GiveWin: Windows 95/98 + NT
  • TSP without GiveWin: DOS, Windows 3.x, Windows 95/98, Windows NT, OS/2.
  • Mac: Apple Macintosh, PowerMac.
  • unix: Sun, IBM RS/6000, DEC Alpha OSF/1, DEC Ultrix, HP 9000, Silicon Graphics, linux/Intel.
  • Other: DEC Alpha/OpenVMS.

EViews:

  • PC: DOS: MicroTSP 7.0 only. Windows 3.x, OS/2: EViews 2.0 only. Windows 95/98, Windows NT: EViews 3.1.
  • Mac: Apple Macintosh, PowerMac: EViews 1.1 only.
  • unix: none.
  • Other: none.
2.1 GRAPHICS

TSP/GiveWin: editable/customizable graphics, in multiple windows. TSP without GiveWin: fairly simple graphics (color video, hardcopy laser), on PC/Mac.

EViews: more customizable graphics, including dual scale, bar, and pie.

2.2 DATA FILE FORMATS

TSP: Reads Excel files to version 4.0, and Stata .dta files. TSP/GiveWin can read Excel files from version 5 and higher.

EViews: Reads Excel files to version 8 (or 97).

3. ECONOMETRIC FEATURE DIFFERENCES

3.1 TIME SERIES

TSP: AR1 - exact or conditional ML. ARIMA - exact or conditional ML with stationarity/invertibility constraints imposed. ARCH with several initial condition options, fast iterations with second derivatives, robust/QMLE standard errors. Unit root tests include Weighted Symmetric. Cointegration tests include Engle-Granger. Shiller lags (Bayesian PDL), Divisia price and/or quantity indices. Regression diagnostics include exact P-value for Durbin-Watson, Breusch-Pagan heteroskedasticity test, LR test for split-sample heteroskedasticity, LM heteroskedasticity test based on squared fitted values, and Shapiro-Wilk normality test with P-value.

EViews: AR1 - conditional ML only. ARIMA - conditional ML. OLS and 2SLS with ARMA residuals. ARCH includes TARCH, EGARCH, and (apparently) ARMA residual terms. Seasonal adjustment includes additive MA and X-11. VAR includes standard errors for impulse response (analytic or Monte Carlo), and Vector Error Correction model. Exponential smoothing (5 methods). Smoothing in Kalman filter includes standard errors. Regression diagnostics include Chow forecast test, 1-step forecast test, and n-step forecast test.

3.2 SIMULTANEOUS EQUATIONS

3.2.1 GENERAL NONLINEAR SYNTAX/METHODS

TSP: Coefficients in nonlinear equations are names, like FRML EQ1 Y = ALPHA*Y(-1); The CONST command can be used to hold coefficients temporarily fixed. The default iteration methods almost always use analytic first derivatives (and sometimes second derivatives); these often provide more reliable convergence and standard errors than iteration with numeric derivatives. Equations can be substituted into each other to build large models or to impose restrictions.

EViews: Coefficients in nonlinear equations are subscripted names, like EQUATION EQ1 Y = ALPHA(1)*Y(-1) Coefficient names other than C(i) require an additional COEFFICIENT command. Default iteration methods usually use numeric derivatives.

3.2.2 ESTIMATION METHODS

TSP: LIML. GMM includes MASK (excludes instruments from selected equations), kernel includes Parzen, user can supply weighting matrix.

EViews: GMM includes 2 automatic bandwidth selection procedures, kernel includes quadratic spectral, can prewhiten data automatically.

3.3 ROBUST ESTIMATION

TSP: LAD minimizes sum of absolute values of residuals (linear model). LMS minimizes sum of absolute values (or squares) of residuals for median residual and below; a high breakdown estimator that is resistant to up to 50% erroneous data, and is useful for outlier detection.
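As an illustration of the LAD criterion (a Python sketch on simulated data, not TSP code), minimizing the sum of absolute residuals resists gross outliers that pull OLS away from the true line:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: y = 1 + 2x + noise, with 10% gross outliers.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
y[:20] += 15.0  # contaminate 20 observations

def sad(beta):
    # LAD criterion: sum of absolute deviations.
    return np.abs(y - beta[0] - beta[1] * x).sum()

lad = minimize(sad, x0=[0.0, 0.0], method="Nelder-Mead").x
ols = np.polyfit(x, y, 1)[::-1]  # reversed to [intercept, slope]
print("LAD (intercept, slope):", lad)
print("OLS (intercept, slope):", ols)
```

The LAD fit stays near the true line, while the OLS intercept is dragged upward by the contaminated observations.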

EViews: Nonparametric regression, including Loess and Nadaraya-Watson local polynomial kernel.

3.4 QUALITATIVE DEPENDENT VARIABLES

TSP: Logit handles multinomial (>=2 choices), conditional and mixed. Probit and Logit include a test for "complete separation". Sample Selection model.

EViews: Logit is binary only (2 choices). Gompit model for binary choice with extreme value errors. Ordered: includes logit and gompit. Censored: includes logistic and extreme value, and upper censoring. Count data: includes "QML". Forecasting of original or latent dependent variable.

(Both programs handle ordered probit, Poisson, and negative binomial models).
3.5 GENERAL MAXIMUM LIKELIHOOD

TSP: ML command does general non-recursive maximum likelihood, where the user writes the log likelihood; TSP generates analytic first and second derivatives and maximizes it. Example code is available for a wide range of ML models, such as frontier production, ordered probit, nested logit, negative binomial, bivariate probit, and bivariate tobit.

ML PROC version handles a wider range of models, such as recursive models (GARCH) and hyperparameter estimation in Kalman filter. ML PROC does not use analytic derivatives (just numeric ones).
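To see why analytic derivatives matter, here is a minimal Newton-Raphson ML sketch in Python (not TSP syntax) for an exponential sample, where the score and Hessian are available in closed form:

```python
# ML for an Exponential(rate lam) sample: logL = n*log(lam) - lam*sum(x).
x = [0.3, 1.2, 0.7, 2.5, 0.4, 1.1, 0.9, 0.6]
n, S = len(x), sum(x)

lam = 1.0  # starting value
for _ in range(25):
    score = n / lam - S        # analytic first derivative of logL
    hess = -n / lam ** 2       # analytic second derivative
    step = score / hess
    lam -= step                # Newton-Raphson update
    if abs(step) < 1e-12:
        break

print(f"Newton estimate: {lam:.6f}, closed-form MLE n/sum(x): {n / S:.6f}")
```

With exact second derivatives the iteration converges quadratically, and -1/hess at the optimum gives the usual ML variance estimate.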

EViews: LOGL command. Uses numeric derivatives by default; the user may optionally supply equations for analytic first derivatives (no provision for analytic second derivatives). Syntax is natural for recursive models (except possibly for initialization of presample values). Wide range of example codes, such as Multinomial Logit, AR1 exact ML and Sample Selection.

3.6 PANEL/POOLED MODELS

TSP: Fixed and random effects work on large and unbalanced panels. The user supplies the stacked series (for example: GDP and Y), and an ID variable usually specifies which observations belong to which individual.

Additional estimation methods (for balanced panels) are available with GMM(MASK) (one equation per year, with predetermined variables as instruments) and LSQ (minimum distance, testing restrictions with Pi matrix).

EViews: Methods work on small ("up to a couple of hundred cross section units") balanced and unbalanced panels. Individual ID names are required for creating the pooled data group from individual time series; for example GDPUS, GDPFRG, GDPIT, YUS, YFRG, YIT. (I infer "small" panels from this, because you have to name/create all these individual variables first before estimation; i.e. it wouldn't be very practical if you have, say, more than 50 individuals). Pooling estimation methods include AR residuals, cross section weights and SUR weights. Easy to specify some variables or AR terms with individual-specific coefficients.

4. PROGRAMMING FEATURES

4.1 LOOPS

TSP: DO for arithmetic loop. DOT for character loop.

EViews: FOR and WHILE for both arithmetic and character loops. For character, use %character index variable(s), and string substitution syntax. FOR loop can operate on several index variables for each pass through the loop. Arithmetic index of loop must be a !name variable, or be previously declared as a SCALAR.

4.2 IF/THEN/ELSE

TSP: IF/THEN/ELSE work only on scalars. DO and ENDDO bracket multiple statements.

EViews: IF/THEN/ELSE work on both series and scalars. THEN/ELSE/ENDIF bracket multiple statements.

4.3 PROCEDURES/SUBROUTINES

TSP: To define: PROC name arguments; . To use: name args; . Can be defined below where they are used (except in interactive mode). LOCAL command can be used to make sure specific modified variables do not change global variables.

EViews: To define: SUBROUTINE LOCAL name(argtype1 arg1 ...) . To use: CALL name args . RETURN command to exit before end. Output arguments that don't exist have to be declared before the CALL. Scalar expressions can be passed to subroutines. This "local" subroutine is otherwise equivalent to a TSP PROC. There is also the default "global" SUBROUTINE, which is defined without the LOCAL keyword. In a global routine, all created variables are global (instead of local). To make local (scalar) variables, use !name .

4.4 MATRIX ALGEBRA

Caveat: Neither TSP nor EViews is as good with matrices as Gauss, MATLAB, or Ox.

TSP: Uses operators rather than function names. For example, the classic OLS formula is MAT B = (X'X)"X'Y; (here ' is the transpose operator and " the matrix inverse).

Operators include element-by-element product and division. Functions include YINV (forced generalized inverse). Generalized inverse is the default (for a symmetric matrix). Matrix types include triangular and diagonal.

EViews: Uses @functionname(a,b,...) for most operations. The OLS formula is MATRIX B = @INV(@TRANSPOSE(X)*X)*@TRANSPOSE(X)*Y or MATRIX B = @INV(@INNER(X))*@TRANSPOSE(X)*Y .

Functions include COLPLACE, @COLUMNEXTRACT, @CONDITION, @COR, @COV, @MAKEDIAGONAL, MATPLACE, @MEAN, @NORM, @ROWEXTRACT, ROWPLACE, @SOLVESYSTEM, @SUBEXTRACT, @SVD, @VAR. "A singular matrix cannot be inverted." (I guess an error message and missing values are the result?)
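Both syntaxes compute the same normal-equations solution; as a neutral reference point, a numpy sketch with illustrative data (not either package's syntax):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
Y = X @ np.array([2.0, -1.5]) + 0.1 * rng.normal(size=50)

# The classic formula B = (X'X)^{-1} X'Y, as written in both packages above.
B = np.linalg.inv(X.T @ X) @ X.T @ Y
# Numerically safer: solve the normal equations instead of forming the
# explicit inverse (cf. TSP defaulting to a generalized inverse).
B2 = np.linalg.solve(X.T @ X, X.T @ Y)
print(B, B2)
```

Both routes recover the coefficients (2.0, -1.5) up to sampling noise; the solve-based route is the one to prefer in practice.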

4.5 OTHER PROGRAMMING FEATURES

TSP: Functions include Log base 10. Can create analytic derivatives of an equation with respect to several arguments.

EViews: Includes functions for string and date conversion/extraction, first- and Nth-order (regular or Log) differencing, percent change, moving average, moving sum, logit, and reciprocal. Function interface to 17 different probability distributions (including inverse and random variables). Named object syntax allows for easy comparison of results from two or more models. Table creation/editing feature for summarizing results from different models.



#2 hanszhu, 2005-01-21 03:24

Example #1: Introduction to EViews - Exchange Rate Purchasing Power Parity

This example looks at nominal and real exchange rates and simple regression-based tests of purchasing power parity, using the dollar/pound exchange rate.

Setup: save the EViews workfile UKdata.wf1 to floppy disk a:. Note: do not left-click the link; right-click it and 'Save the link' as UKdata.wf1 (some browsers will otherwise immediately start EViews, Excel, Word, or another program). Save the EViews program file Example1.prg the same way. Start EViews, execute File / Open with Type=program (.prg), Drive=a:, File Name=example1.prg, and click OK. Run the program file example1.prg (execute Run Example1). Let us see what the program has done.

(a) The program loads the workfile UKdata.wf1, which contains the basic series: dollar/pound exchange rate, UK consumer price index (base year 1995), and US consumer price index (base year 1995). (Data source is the IMF's International Financial Statistics CD-ROM.)

(b) Always start with a closer examination of the data you have been given. The program lines in example1.prg generate a table with the basic time series and a number of graphs. You can look at them using the command Show <object name>. Look carefully for any irregularities in the data (typing errors, breaks, statistical characteristics: mean, variance, trends). Note: you can do all of this interactively in EViews, but the purpose of this example is to show you the programming features of EViews.

(c) Next the program performs a number of data transformations: taking logarithms and generating first differences. These transformations are needed for the statistical analysis. We also calculate the real exchange rate and the relative price that are central to our analysis of the purchasing power parity hypothesis.

(d) The program estimates and saves the results of 3 specific empirical tests of purchasing power parity.
Look at the results (use the Show <object name> command) and compare the estimates with the original results published in the journal articles: Frenkel (1981), Adler and Lehmann (1983), and Abuaf and Jorion (1990).

Frenkel (1981) is one of the first studies to test the behavior of exchange rates in the post-Bretton Woods period. To test the hypothesis of purchasing power parity, he estimates the equation ln St = a0 + a1 ln(P/P*)t, where S is the spot exchange rate and P/P* the ratio of the price indices in the domestic and foreign country (note: remember that which country is domestic or foreign depends on the way the exchange rate is defined!). If PPP holds, we expect a1 = 1. (Coefficient a0 is irrelevant, because it depends on an unknown equilibrium level of the real exchange rate and/or the base years of the price indices used. To replicate Frenkel's result more closely we would need to rebase the price indices.) Frenkel examines several countries and exchange rates. He finds that the results involving the US dollar are generally very poor, with insignificant or incorrectly signed coefficients; results not involving the US dollar are better. Nevertheless, we focus on the result for the dollar/pound rate and consumer price indices. The data period is June 1973 - July 1979. The estimation method is Two Stage Least Squares (TSLS) with a Cochrane-Orcutt correction for first-order serial correlation; the instruments are a constant, time, time squared, and lagged values of the dependent and independent variables. TSLS is used because the exchange rate and prices are assumed to be determined simultaneously: the price ratio is then an endogenous explanatory variable, which requires an instrumental variables estimator; OLS estimates would be biased. Frenkel's Table 7 reports the estimation results:

  Constant        ln(P/P*)        SE     D-W   rho
  2.982 (2.978)   1.070 (0.897)   0.029  1.66  0.998

The table shows the two coefficients with their standard errors (in parentheses), the standard error of the regression, the Durbin-Watson statistic for serial correlation, and the estimated first-order autocorrelation coefficient. The PPP hypothesis a1 = 1 cannot be rejected with the standard t-test on coefficient a1, but neither can the hypothesis a1 = 0 be rejected. (Note: EViews does not use the Cochrane-Orcutt correction for first-order serial correlation that Frenkel used. In addition to the TSLS estimates, the program also generates the OLS estimates, which turn out to be completely different.)

Adler and Lehmann (1983) also test the purchasing power parity hypothesis. The Frenkel (1981) approach is seriously flawed, because the nominal exchange rate and the price ratio are nonstationary variables; in general such regressions may be 'spurious' and biased (Fuller 1976). (Note: the subsequent development of Engle-Granger cointegration analysis qualifies this statement.) Adler and Lehmann examine the hypothesis that the real exchange rate follows a random walk. If this were true, it would imply that PPP does not hold. The estimated model is yt = SUMi=1,n bi yt-i + vt, where yt is the innovation (i.e. log change) in the real exchange rate. The test is bi = 0 for all i. A model in first differences is used because the alternative test, a1 = 1 and a2 = ... = an = 0 in a model xt = SUMi=1,n ai xt-i + vt with real exchange rate levels xt, is generally biased towards finding a1 < 1. (Note: again, cointegration qualifies this statement.) Using first differences, Adler and Lehmann can employ a standard exclusion F-test on the estimated bi to examine PPP. The model is estimated for many countries and several sample periods; we focus on the results for Great Britain, monthly data 1971.04-1981.05, and 6 lags. Adler and Lehmann's Table IIA reports an F-statistic of 0.13 with an associated marginal significance level in excess of 0.90 (i.e. the probability of observing such an F-statistic when the bi are all zero exceeds 90 percent).
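The Adler-Lehmann exclusion test is a standard F-test; a Python sketch on data simulated under the random-walk null (illustrative, not the actual series or the EViews program):

```python
import numpy as np

rng = np.random.default_rng(11)
T, p = 122, 6  # roughly monthly 1971.04-1981.05, 6 lags
e = np.cumsum(rng.normal(scale=0.03, size=T))  # log real exchange rate under the null
y = np.diff(e)                                  # innovations y_t

# Regress y_t on its first p lags (no constant) and test b_1 = ... = b_p = 0.
dep = y[p:]
X = np.column_stack([y[p - i : len(y) - i] for i in range(1, p + 1)])
b, *_ = np.linalg.lstsq(X, dep, rcond=None)
rss_u = np.sum((dep - X @ b) ** 2)   # unrestricted residual sum of squares
rss_r = np.sum(dep ** 2)             # restricted: all lag coefficients zero
F = ((rss_r - rss_u) / p) / (rss_u / (len(dep) - p))
print(f"F({p}, {len(dep) - p}) = {F:.2f}")
```

Small F values, like Adler and Lehmann's 0.13 for Britain, fail to reject the random walk.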
This test does not reject the null hypothesis of a random walk in the real exchange rate: a bad result for the PPP hypothesis.

Abuaf and Jorion (1990) also test the hypothesis that the real exchange rate follows a random walk (again, if true, PPP does not hold). They argue that previous tests, based on first-differenced variables, have little power to reject the random walk model. They develop a test in levels of the real exchange rate and provide the required critical values of the test statistics. The estimated model is et+1 = k0 + k1 et + SUM j=1,p bj (et-j+1 - et-j), where et is the log real exchange rate. The random walk model implies k1 = 1. This can be tested together with the assumption of either a zero or a non-zero k0. Abuaf and Jorion show that, assuming k0 = 0, the test statistics rho = T (k1-1) and tau = (k1-1)/std(k1) have one-sided 10% critical values of -11.1 and -2.57. The model is estimated for ten countries using real exchange rates against the US dollar and monthly data for January 1973-December 1987. Their Table V provides estimates of the model with no lags of real exchange rate changes. (They also suggest using 12 lags in the more general dynamic specification, but do not provide results for the individual countries.) For the British pound, Abuaf and Jorion report in Table V:

  k0                k1                rho    tau
  0.0055 (0.0037)   0.9767 (0.0170)   -4.20  -1.37
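For the no-lag specification, rho and tau reduce to an OLS regression of e(t+1) on a constant and e(t); a Python sketch with a simulated random walk (illustrative, not the actual dollar/pound series):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 180  # 15 years of monthly data
e = np.cumsum(rng.normal(scale=0.03, size=T))  # log real exchange rate under the null

# OLS of e_{t+1} on a constant and e_t.
X = np.column_stack([np.ones(T - 1), e[:-1]])
y = e[1:]
XtX_inv = np.linalg.inv(X.T @ X)
k0, k1 = XtX_inv @ X.T @ y

resid = y - X @ np.array([k0, k1])
s2 = resid @ resid / (len(y) - 2)
se_k1 = np.sqrt(s2 * XtX_inv[1, 1])

rho = len(y) * (k1 - 1.0)   # T*(k1 - 1)
tau = (k1 - 1.0) / se_k1    # t-ratio on k1 = 1
print(f"k1 = {k1:.4f}, rho = {rho:.2f}, tau = {tau:.2f}")
```

Comparing tau with the one-sided 10% critical value of -2.57 will usually fail to reject the random walk here, mirroring the Abuaf-Jorion finding; note that these statistics require nonstandard (Dickey-Fuller-type) critical values, not the usual t-table.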

These results do not reject the null hypothesis of a random walk in the real exchange rate; PPP cannot be confirmed.

Examine and compare the results from the original studies with the output from the EViews program. Look for the corresponding estimates and test statistics. Experiment with the program and data; for example, try re-estimating with the sample extended to all available data.

Note: The empirical tests of purchasing power parity presented here are used only to introduce programming in EViews and econometric analysis at a simple level. The tests and results should not be taken at face value, i.e. as final results or as examples of 'best practice' on this specific topic. The literature has since progressed and provided new evidence on the existence or non-existence of PPP.

References
Abuaf, N. and P. Jorion, 'Purchasing power parity in the long run', Journal of Finance, vol.45 (1) March 1990: 157-74.
Adler, M. and B. Lehmann, 'Deviations from purchasing power parity in the long run', Journal of Finance, vol.38 (5) December 1983: 1471-87.
Frenkel, J.A., 'Flexible exchange rates, prices and the role of "news": Lessons from the 1970s', Journal of Political Economy, vol.89 (4) 1981: 665-705.


#3 hanszhu, 2005-01-21 03:26

Example #2: Stationary White Noise and Nonstationary Random Walks - Background on Unit Roots and Spurious Regression

This example looks at stationary and nonstationary time series and provides the background for the importance of unit root tests in econometrics. The EViews program and workfile are for EViews 2.0.

Setup: save the EViews program file Example2.prg to floppy disk a:. Note: do not left-click the link; right-click it and 'Save the link' as Example2.prg (some browsers will otherwise immediately start EViews, Excel, Word, or another program). Start EViews, execute File / Open with Type=program (.prg), Drive=a:, File Name=example2.prg, and click OK. Run the program file example2.prg (execute Run Example2). Let us see what the program has done.

A time series that is white noise is stationary: innovations are transient. We define a sequence to be a white-noise process if it is characterized by:

  • zero mean
  • constant variance
  • no serial correlation

To see the stationarity of a white noise process, we generate one and examine it. Look at the graph. The unit root test (Augmented Dickey-Fuller test with 4 lags, no constant term, no trend) formally rejects the null hypothesis of nonstationarity, i.e. a unit root.

Of course, if we generate a second white noise process, we expect it to be completely unrelated to the first. Is it? We examine a scatter plot and formal OLS regression estimate.

A random walk is nonstationary: innovations are permanent. A random walk has no constant mean and no constant variance (these change whenever the sample period is shortened or lengthened), and it has high serial correlation. We can generate a random walk by summing up a series of white noise shocks. Note: we can build the cumulative sum with these program lines because EViews updates cell-by-cell (i.e., when it updates the second cell, the first has already been updated). Look at the graph of the random walk series and the unit root test conducted on it. Compare with the white noise series and comment.

Of course, if we independently generate a second random walk process, we expect it to be completely unrelated to the first. Is it? Granger and Newbold (1974) showed that unrelated random walks appear related about 75% of the time. Phillips (1986) showed that the larger your sample, the more likely you are to reject unrelatedness! However, take a look at the two series of regression residuals, and see if you can discover any clues to spurious regression in them.
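The Granger-Newbold experiment is easy to reproduce outside EViews; a Python sketch of the comparison described above (simulated data):

```python
import numpy as np

def r_squared(x, y):
    # R^2 from OLS of y on a constant and x.
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / tss

rng = np.random.default_rng(7)
n, reps = 200, 500
r2_wn, r2_rw = [], []
for _ in range(reps):
    a, b = rng.normal(size=(2, n))  # two independent white noise series
    r2_wn.append(r_squared(a, b))
    r2_rw.append(r_squared(np.cumsum(a), np.cumsum(b)))  # two independent random walks
print(f"mean R^2, white noise pairs:  {np.mean(r2_wn):.3f}")
print(f"mean R^2, random walk pairs: {np.mean(r2_rw):.3f}")
```

The independent random walks routinely produce sizable R^2 values even though the series are unrelated by construction, which is exactly the spurious regression problem.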

References Granger, C.W.J. and P. Newbold, 'Spurious regressions in econometrics,' Journal of Econometrics, vol.2 (2) July 1974: 111-20. Phillips, P.C.B., 'Understanding spurious regressions in econometrics,' Journal of Econometrics, vol.33 (3) December 1986: 311-40.


#4 hanszhu, 2005-01-21 03:27

Example #3: Estimation and Simulation of Small Macro Models - Evaluating Monetary Policy Rules

This example looks at the estimation and simulation of small macroeconomic models. The specific application is the evaluation of monetary policy rules. The EViews program is written for EViews 2.0.

Setup: save the Excel 5.0/95 spreadsheet file USqdata.xls to floppy disk a:. Note: do not left-click the link; right-click it and 'Save the link' as USqdata.xls (some browsers will otherwise immediately start EViews, Excel, Word, or another program). Save the EViews program file Example3.prg the same way. (Alternatively, save the EViews 4 version of the program written by Kenneth Leong.) Start EViews, execute File / Open with Type=program (.prg), Drive=a:, File Name=example3.prg, and click OK. Run the program file example3.prg (execute Run Example3). Let us see what the program has done.

(a) The program loads the data from the Excel spreadsheet file USqdata.xls. This file contains basic quarterly time series for the United States: Gross Domestic Product in constant 1996 prices (real GDP), the GDP implicit price deflator (base 1996=100), real potential (trend) GDP, and the federal funds rate. (Data sources are provided in the Excel datasheet.)

(b) Always start with a closer examination of the data you have been given. The program lines generate a number of graphs. You can look at them using the command Show <object name>. Look carefully for any irregularities in the data (typing errors, breaks, statistical characteristics: mean, variance, trends). Note: you can do all of this interactively in EViews, but the purpose of this example is to show you the programming features of EViews.

(c) Next the program performs a number of data transformations: taking logarithms and generating first differences. These transformations are needed for the statistical analysis.

(d) The program estimates and saves the results of 2 behavioral equations which make up a small macroeconomic model of the US economy. These equations describe inflation and output.
Look at the results (use the Show <object name> command) and compare the estimates with the original results published in the associated journal articles.

(e) The EViews model is built by 'appending' the behavioral equations and a number of technical or definition equations. A policy rule for the policy variable is also included. Finally, the model is simulated in a historical simulation and in stochastic simulations. Results are presented in a number of graphs, series, and tables.

Rudebusch and Svensson (1999) estimated a small macroeconometric model of the US economy to evaluate a number of monetary policy rules. This model consists of 2 core equations and is basically a dynamic old-fashioned Keynesian model. The inflation equation represents an accelerationist form of the Phillips curve, in which (a) shocks to output cause permanent changes in inflation and (b) monetary policy affects inflation only through its effect on the output gap. The output gap equation represents the view that monetary policy shocks temporarily affect output through changes in the real short-term interest rate. (Some writers have, inappropriately, started the convention of referring to this model as the Rudebusch-Svensson model.) Rudebusch and Svensson (2000) report the following estimates for the 2 equations:

(1) infl(t+1) = 0.675 infl(t) - 0.077 infl(t-1) + 0.286 infl(t-2) + 0.115 infl(t-3) + 0.152 ygap(t) + eps(t+1)
    (standard errors: 0.083, 0.103, 0.107, 0.088, 0.037)

AdjR2= 0.81, SE= 1.084, DW= 1.99

(2) ygap(t+1) = 1.161 ygap(t) - 0.259 ygap(t-1) - 0.088 [ibar(t) - inflbar(t)] + mu(t+1)
    (standard errors: 0.079, 0.077, 0.032)

AdjR2= 0.90, SE= 0.823, DW= 2.08
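A deterministic one-step sketch of the two equations in Python (the point estimates are from above; the shocks are set to zero and the inputs are my own illustrative values):

```python
def infl_next(infl_lags, ygap):
    # infl_lags = [infl(t), infl(t-1), infl(t-2), infl(t-3)]
    coefs = [0.675, -0.077, 0.286, 0.115]
    return sum(c * l for c, l in zip(coefs, infl_lags)) + 0.152 * ygap

def ygap_next(ygap_t, ygap_tm1, real_rate_gap):
    # real_rate_gap = ibar(t) - inflbar(t), the de-meaned real interest rate
    return 1.161 * ygap_t - 0.259 * ygap_tm1 - 0.088 * real_rate_gap

# Inflation steady at 2% with a closed output gap is almost self-replicating:
# the reported lag coefficients sum to 0.999 (the restriction imposes exactly 1).
print(infl_next([2.0, 2.0, 2.0, 2.0], 0.0))  # about 1.998
# A 1% output gap facing a 2-point real interest rate gap:
print(ygap_next(1.0, 1.0, 2.0))              # about 0.726
```

Iterating these two functions (plus a policy rule for the interest rate) is the whole deterministic simulation loop.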

The sample period is 1961:1 to 1996:4, coefficient standard errors are given in parentheses, and the estimation method is OLS. The restriction that the sum of the lag coefficients of inflation equals one is imposed in estimation. All variables were de-meaned prior to estimation, so no constants appear in the equations and the constant equilibrium level of the real interest rate is set equal to zero.

Variable definitions: infl is quarterly inflation in the GDP price index in percent at an annual rate (i.e. 400*(ln Pt - ln Pt-1)); inflbar is the 4-quarter average of inflation (i.e. 1/4 SUMj=0 to 3 inflt-j); i is the quarterly average federal funds rate in percent per year; ibar is the 4-quarter average federal funds rate (i.e. 1/4 SUMj=0 to 3 it-j); ygap is the relative gap between actual real GDP and potential GDP in percent (i.e. 100*(ln Qt - ln Q*t)); rbar is the constant average real interest rate derived from ibar - inflbar; the residuals eps and mu will be interpreted as economic shocks.

The EViews program estimates the 2 core equations and builds the model by adding the required add-on factors and technical or definition equations. A specific monetary policy rule is also added to the model, in this case a basic Taylor rule for setting the short-term interest rate.

Taylor rules. Interest rate rules are usually some variant of the original rule from Taylor (1993). In general, this rule sets it = rbart + inflt + 0.5*(inflt - infl*) + 0.5*ygapt, where infl* is the inflation target (for example 2%) and rbar is the equilibrium real interest rate (usually set at some average of historical observations). Variants of this rule differ in the weights on inflation and the output gap, in forward-looking or lagged variables, and in additional variables such as the lagged interest rate to introduce smoothing objectives. Taylor (1999) provides a summary of recent research on monetary policy rules with interest rates.
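The Taylor (1993) rule itself is a one-liner; a Python sketch with illustrative values (the 2% target and the inputs are my own, not from the paper):

```python
def taylor_rate(rbar, infl, infl_target, ygap):
    # Taylor (1993): i = rbar + infl + 0.5*(infl - infl*) + 0.5*ygap
    return rbar + infl + 0.5 * (infl - infl_target) + 0.5 * ygap

# Inflation one point above a 2% target, with a 1% output gap:
rate = taylor_rate(rbar=2.0, infl=3.0, infl_target=2.0, ygap=1.0)
print(rate)  # 2 + 3 + 0.5 + 0.5 = 6.0 percent
```

Changing the two 0.5 weights, or adding a lagged-rate smoothing term, gives the rule variants discussed above.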
In a historical or counterfactual simulation, the change in policy (i.e. a new policy rule) is confronted with the estimates of the shocks that actually occurred in some historical period. If the model is correct, we can determine whether the alternative policy would have performed better or worse than the actual historical policy. A problem concerns robustness and recommendations for future policy: historical results may depend on chance occurrences in the shocks, which may not be representative of future shocks.

Stochastic simulation. Stochastic simulations attempt to address the problem of robustness. New time series of shocks are generated, using assumptions about the distribution of the shocks (for example, a normal distribution with variance derived from the estimated historical shocks). The performance of the new policy is then examined for 'different economies'.

Policy performance. The performance of the policy rule is evaluated on the means and standard deviations of inflation, the output gap, and the interest rate produced by the simulations. One might also use some form of utility or loss function to explicitly account for the possible tradeoffs between the different means and variances.

Note: The empirical estimates presented here are used only to introduce programming in EViews and econometric analysis at a simple level. The tests and results should not be taken at face value, i.e. as final results or as examples of 'best practice' on this specific topic. There exist many variants of the basic type of model used here, and many models with fundamentally different behavioral equations.

References
McCallum, B.T., 'Issues in the design of monetary policy,' in J.B. Taylor and M. Woodford (eds.) Handbook of Macroeconomics. North Holland, 1999. (NBER Working Paper #6016, April 1997)
Rudebusch, G.D. and L.E.O. Svensson, 'Policy rules for inflation targeting,' in J.B. Taylor (ed.) Monetary Policy Rules. Chicago Univ. Press, 1999. (NBER Working Paper #6512, April 1998)
Rudebusch, G.D. and L.E.O. Svensson, 'Eurosystem monetary targeting: Lessons from U.S. data,' May 2000.
Taylor, J.B., 'The robustness and efficiency of monetary policy rules as guidelines for interest rate setting by the European central bank,' Journal of Monetary Economics, vol.43 (3) June 1999: 655-79.


#5 hanszhu, 2005-01-21 09:06

[Download] Applied Bayesian Modelling

8325.rar (2.05 MB, price: 50 forum coins). The attachment includes:
  • Ebook.Congdon P. Applied Bayesian Modelling.pdf



#6 gloryfly, 2005-01-21 19:32

To the poster above: is there anything special about this one? It should be downloadable directly from the QMS website.


#7 hanszhu, 2005-01-21 21:12
Applied Bayesian Modelling, Peter Congdon. ISBN 0-471-48695-7, hardcover, 478 pages, April 2003. CDN $143.99.


#8 hanszhu, 2005-01-21 21:16

Applied Bayesian Modelling Peter Congdon

Description

The use of Bayesian statistics has grown significantly in recent years, and will undoubtedly continue to do so. Applied Bayesian Modelling is the follow-up to the author's best-selling book, Bayesian Statistical Modelling, and focuses on the potential applications of Bayesian techniques in a wide range of important topics in the social and health sciences. The applications are illustrated through many real-life examples and software implementation in WINBUGS, a popular software package that offers a simplified and flexible approach to statistical modelling. The book gives detailed explanations for each example, explaining fully the choice of model for each particular problem. The book

· Provides a broad and comprehensive account of applied Bayesian modelling.

· Describes a variety of model assessment methods and the flexibility of Bayesian prior specifications.

· Covers many application areas, including panel data models, structural equation and other multivariate structure models, spatial analysis, survival analysis and epidemiology.

· Provides detailed worked examples in WINBUGS to illustrate the practical application of the techniques described. All WINBUGS programs are available from an ftp site.

The book provides a good introduction to Bayesian modelling and data analysis for a wide range of people involved in applied statistical analysis, including researchers and students from statistics, and the health and social sciences. The wealth of examples makes this book an ideal reference for anyone involved in statistical modelling and analysis.


#9 hanszhu, 2005-01-21 21:17

Applied Bayesian Modelling Peter Congdon

Preface.

The Basis for, and Advantages of, Bayesian Model Estimation via Repeated Sampling.

Hierarchical Mixture Models.

Regression Models.

Analysis of Multi-Level Data.

Models for Time Series.

Analysis of Panel Data.

Models for Spatial Outcomes and Geographical Association.

Structural Equation and Latent Variable Models.

Survival and Event History Models.

Modelling and Establishing Causal Relations: Epidemiological Methods and Models.

Index.


#10 hanszhu, 2005-01-21 21:33
"It is certainly a fine choice as a supporting reference in either a first or second Bayesian methods course…” (Technometrics, May 2004)

"...has a contemporary feel, with recent developments in financial time series modelling and epidemiology included..." (Short Book Reviews, Vol 23(3), December 2003)

