Thread starter: tulipsliu

[Academic Frontiers] [QuantEcon] Mixing MATLAB with FORTRAN

91
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:28:27
$$\textbf{y}_t \sim \mathcal{N}(\mu_t, \sigma^2_t), \quad t=2,\dots,T$$
$$\mu_t = \alpha + \sum^P_{p=1} \phi_p \textbf{y}_{t-p}, \quad t=(P+1),\dots,T$$
$$\epsilon = \textbf{y} - \mu$$
$$\delta = (\epsilon > 0) \times 1$$
$$\sigma^2_t = \omega + \sum^Q_{q=1} \left[\theta_{q,1} \delta_{t-q} \epsilon^2_{t-q} + \theta_{q,2} (1-\delta_{t-q}) \epsilon^2_{t-q}\right]$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\phi_p \sim \mathcal{N}(0, 1000), \quad p=1,\dots,P$$
$$\omega \sim \mathcal{HC}(25)$$
$$\theta_{q,j} \sim \mathcal{U}(0, 1), \quad q=1,\dots,Q, \quad j=1,2$$
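
As a worked illustration (not code from the original post), a log-posterior for this model could be sketched in R roughly as follows, assuming $P=Q=1$, a data list with an element \code{y}, and an arbitrary parameter ordering:

```r
library(LaplacesDemon)

# Hypothetical sketch of a log-posterior for the asymmetric ARCH model above
# with P = Q = 1. Data is assumed to be a list with a numeric vector y.
Model <- function(parm, Data) {
  alpha <- parm[1]
  phi   <- parm[2]
  omega <- interval(parm[3], 1e-10, Inf)    # omega > 0
  theta <- interval(parm[4:5], 1e-10, 1)    # theta_{1,1}, theta_{1,2} in (0,1)
  parm[3] <- omega; parm[4:5] <- theta
  y <- Data$y; T <- length(y)
  mu    <- alpha + phi * y[1:(T-1)]         # mu_t, t = 2,...,T
  eps   <- y[2:T] - mu                      # epsilon_t, t = 2,...,T
  delta <- as.numeric(eps > 0)              # delta_t, t = 2,...,T
  # sigma^2_t from lagged residuals; the first conditional variance is set to omega
  sigma2 <- c(omega,
              omega + theta[1] * delta[1:(T-2)] * eps[1:(T-2)]^2 +
                      theta[2] * (1 - delta[1:(T-2)]) * eps[1:(T-2)]^2)
  LL <- sum(dnormv(y[2:T], mu, sigma2, log=TRUE))
  LP <- LL + dnormv(alpha, 0, 1000, log=TRUE) + dnormv(phi, 0, 1000, log=TRUE) +
        dhalfcauchy(omega, 25, log=TRUE) + sum(dunif(theta, 0, 1, log=TRUE))
  list(LP=LP, Dev=-2*LL, Monitor=LP,
       yhat=rnorm(length(mu), mu, sqrt(sigma2)), parm=parm)
}
```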

92
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:29:02
$$\textbf{y}_t \sim \mathcal{N}(\mu_t, \sigma^2_t), \quad t=2,\dots,T$$
$$\mu_t = \alpha + \sum^P_{p=1} \phi_p \textbf{y}_{t-p} + \delta_{t-1} \gamma_1 \sigma^2_{t-1} + (1 - \delta_{t-1}) \gamma_2 \sigma^2_{t-1}, \quad t=(P+1),\dots,T$$
$$\epsilon = \textbf{y} - \mu$$
$$\delta = (\epsilon > 0) \times 1$$
$$\sigma^2_t = \omega + \sum^Q_{q=1} \left[\theta_{q,1} \delta_{t-q} \epsilon^2_{t-q} + \theta_{q,2} (1-\delta_{t-q}) \epsilon^2_{t-q}\right]$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\gamma_k \sim \mathcal{N}(0, 1000), \quad k=1,\dots,2$$
$$\phi_p \sim \mathcal{N}(0, 1000), \quad p=1,\dots,P$$
$$\omega \sim \mathcal{HC}(25)$$
$$\theta_{q,j} \sim \mathcal{U}(0, 1), \quad q=1,\dots,Q, \quad j=1,2$$
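
Only the conditional mean differs from the previous post. Because $\mu_t$ now depends on $\sigma^2_{t-1}$, a sketch under the same assumptions as above ($P=Q=1$, hypothetical names) would compute the recursion with a loop:

```r
library(LaplacesDemon)

# Hypothetical sketch of the ARCH-in-mean variant above (P = Q = 1).
# Because mu_t depends on sigma^2_{t-1}, the recursion is written as a loop.
Model <- function(parm, Data) {
  alpha <- parm[1]; phi <- parm[2]; gamma <- parm[3:4]
  omega <- interval(parm[5], 1e-10, Inf)
  theta <- interval(parm[6:7], 1e-10, 1)
  parm[5] <- omega; parm[6:7] <- theta
  y <- Data$y; T <- length(y)
  mu <- eps <- delta <- numeric(T)
  sigma2 <- rep(omega, T)                   # initialize sigma^2_1 at omega
  for (t in 2:T) {
    mu[t] <- alpha + phi * y[t-1] +
             delta[t-1] * gamma[1] * sigma2[t-1] +
             (1 - delta[t-1]) * gamma[2] * sigma2[t-1]
    eps[t]    <- y[t] - mu[t]
    delta[t]  <- as.numeric(eps[t] > 0)
    sigma2[t] <- omega + theta[1] * delta[t-1] * eps[t-1]^2 +
                         theta[2] * (1 - delta[t-1]) * eps[t-1]^2
  }
  LL <- sum(dnormv(y[2:T], mu[2:T], sigma2[2:T], log=TRUE))
  LP <- LL + dnormv(alpha, 0, 1000, log=TRUE) + dnormv(phi, 0, 1000, log=TRUE) +
        sum(dnormv(gamma, 0, 1000, log=TRUE)) + dhalfcauchy(omega, 25, log=TRUE) +
        sum(dunif(theta, 0, 1, log=TRUE))
  list(LP=LP, Dev=-2*LL, Monitor=LP,
       yhat=rnorm(T-1, mu[2:T], sqrt(sigma2[2:T])), parm=parm)
}
```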

93
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:30:34
$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$
$$\mu_t = \alpha + \phi \textbf{y}_{t-1} + \sum^K_{k=1} \textbf{X}_{t,k} \beta \lambda^{\textbf{L}[t,k]}, \quad k=1,\dots,K, \quad t=2,\dots,T$$
$$\mu_1 = \alpha + \sum^K_{k=1} \textbf{X}_{1,k} \beta \lambda^{\textbf{L}[1,k]}, \quad k=1,\dots,K$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta \sim \mathcal{N}(0, 1000)$$
$$\lambda \sim \mathcal{U}(0, 1)$$
$$\phi \sim \mathcal{N}(0, 1000)$$
$$\sigma \sim \mathcal{HC}(25)$$
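
Assuming the data list carries \code{y}, a $T \times K$ matrix \code{X}, and a matching matrix \code{L} of lag indices (names chosen here only for illustration), a hedged sketch of the log-posterior for this geometric-lag model is:

```r
library(LaplacesDemon)

# Hypothetical sketch of the geometric-lag model above. Data is assumed to be a
# list with y (length T), a T x K matrix X, and a T x K matrix L of lag indices.
Model <- function(parm, Data) {
  alpha <- parm[1]; beta <- parm[2]; phi <- parm[3]
  lambda <- interval(parm[4], 1e-10, 1 - 1e-10)
  sigma  <- interval(parm[5], 1e-10, Inf)
  parm[4] <- lambda; parm[5] <- sigma
  y <- Data$y; X <- Data$X; L <- Data$L; T <- length(y)
  lagterm <- rowSums(X * beta * lambda^L)        # sum_k X[t,k] * beta * lambda^L[t,k]
  mu <- c(alpha + lagterm[1],                    # t = 1: no autoregressive term
          alpha + phi * y[1:(T-1)] + lagterm[2:T])
  LL <- sum(dnorm(y, mu, sigma, log=TRUE))
  LP <- LL + dnormv(alpha, 0, 1000, log=TRUE) + dnormv(beta, 0, 1000, log=TRUE) +
        dnormv(phi, 0, 1000, log=TRUE) + dunif(lambda, 0, 1, log=TRUE) +
        dhalfcauchy(sigma, 25, log=TRUE)
  list(LP=LP, Dev=-2*LL, Monitor=LP, yhat=rnorm(T, mu, sigma), parm=parm)
}
```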

94
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:31:10
The \pkg{LaplacesDemon} package is capable of estimating any Bayesian model for which the likelihood is specified\footnote{Examples of more than 100 Bayesian models may be found in the ``Examples'' vignette that comes with the \pkg{LaplacesDemon} package. Likelihood-free estimation is also possible by approximating the likelihood, such as in Approximate Bayesian Computation (ABC).}. To use the \pkg{LaplacesDemon} package, the user must specify a model. Let's consider a linear regression model, which is often denoted as:

$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$
$$\mu = \textbf{X}\beta$$

The dependent variable, $\textbf{y}$, is normally distributed according to expectation vector $\mu$ and scalar variance $\sigma^2$, and expectation vector $\mu$ is equal to the inner product of design matrix \textbf{X} and transposed parameter vector $\beta$.

For a Bayesian model, the notation for the residual variance, $\sigma^2$, has often been replaced with the inverse of the residual precision, $\tau^{-1}$. Here, $\sigma^2$ will be used. Prior probabilities are specified for $\beta$ and $\sigma$ (the standard deviation, rather than the variance):

$$\beta_j \sim \mathcal{N}(0, 1000), \quad j=1,\dots,J$$
$$\sigma \sim \mathcal{HC}(25)$$

Each of the $J$ $\beta$ parameters is assigned a vague\footnote{Traditionally, a vague prior would be considered to be under the class of uninformative or non-informative priors. `Non-informative' may be more widely used than `uninformative', but here that is considered poor English, such as saying something is `non-correct' when there is a word for that \dots `incorrect'. In any case, uninformative priors do not actually exist \citep{irony97}, because all priors are informative in some way. These priors are described here as vague, but not as uninformative.} prior probability distribution that is normally distributed according to $\mu=0$ and $\sigma^2=1000$. The large variance, or small precision, indicates a great deal of uncertainty about each $\beta$, and is hence a vague distribution. The residual standard deviation $\sigma$ is half-Cauchy-distributed according to its hyperparameter, scale=25. When exploring new prior distributions, the user is encouraged to use the \code{is.proper} function to check for prior propriety.
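
In the \code{Model}-function form used throughout the \pkg{LaplacesDemon} documentation, this regression with the vague priors above can be sketched roughly as follows; the \code{Data} elements \code{pos.beta} and \code{pos.sigma} are assumed index vectors, not objects defined in this thread:

```r
library(LaplacesDemon)

# Sketch of a Model function for the regression above, in the style of the
# package vignettes. Data is assumed to be a list with y, an N x J design
# matrix X, and index vectors pos.beta and pos.sigma locating the parameters.
Model <- function(parm, Data) {
  beta  <- parm[Data$pos.beta]
  sigma <- interval(parm[Data$pos.sigma], 1e-100, Inf)
  parm[Data$pos.sigma] <- sigma
  # Log-priors: beta_j ~ N(0, 1000), sigma ~ Half-Cauchy(25)
  beta.prior  <- sum(dnormv(beta, 0, 1000, log=TRUE))
  sigma.prior <- dhalfcauchy(sigma, 25, log=TRUE)
  # Log-likelihood: y ~ N(mu, sigma^2) with mu = X %*% beta
  mu <- as.vector(tcrossprod(Data$X, t(beta)))
  LL <- sum(dnorm(Data$y, mu, sigma, log=TRUE))
  # Log of the unnormalized joint posterior
  LP <- LL + beta.prior + sigma.prior
  list(LP=LP, Dev=-2*LL, Monitor=LP, yhat=rnorm(length(mu), mu, sigma), parm=parm)
}
```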

95
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:34:29
A numerical approximation algorithm iteratively maximizes the logarithm of the unnormalized joint posterior density, as specified in the user's \code{Model} function. In Bayesian inference, the logarithm of the unnormalized joint posterior density is proportional to the sum of the log-likelihood and the logarithm of the prior densities:

$$\log[p(\Theta|\textbf{y})] \propto \log[p(\textbf{y}|\Theta)] + \log[p(\Theta)]$$

where $\Theta$ is a set of parameters, $\textbf{y}$ is the data, $\propto$ means `proportional to'\footnote{For those unfamiliar with $\propto$, this symbol simply means that two quantities are proportional if they vary in such a way that one is a constant multiplier of the other. This is due to an unspecified constant of proportionality in the equation. Here, this can be treated as `equal to'.}, $p(\Theta|\textbf{y})$ is the joint posterior density, $p(\textbf{y}|\Theta)$ is the likelihood, and $p(\Theta)$ is the set of prior densities.
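
For illustration only, such an iterative maximization could be run with \code{LaplaceApproximation} on simulated data, reusing the regression \code{Model} sketch above; every object name below is an assumption made for the example, not something defined in this thread:

```r
library(LaplacesDemon)

# Illustration only: iteratively maximize LP for the regression Model sketched
# above, on simulated data.
set.seed(1)
N <- 100; J <- 3
X <- cbind(1, matrix(rnorm(N * (J - 1)), N, J - 1))
y <- as.vector(X %*% c(1, 2, -1) + rnorm(N))
MyData <- list(y=y, X=X, pos.beta=1:J, pos.sigma=J + 1,
               parm.names=c(paste0("beta", 1:J), "sigma"), mon.names="LP")
Initial.Values <- c(rep(0, J), 1)

Fit <- LaplaceApproximation(Model, parm=Initial.Values, Data=MyData,
                            Iterations=1000, sir=FALSE)
Fit$Summary1    # posterior modes and asymptotic summaries
```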

96
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:37:12
Finally, everything is put together to calculate the logarithm of the unnormalized joint posterior density. The expectation vector, formed as the inner product of the design matrix and the transposed parameter vector, is used together with $\sigma$ to calculate the sum of the log-likelihoods, where:

$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$

and as noted before, the logarithm of the unnormalized joint posterior density is:

$$\log[p(\Theta|\textbf{y})] \propto \log[p(\textbf{y}|\Theta)] + \log[p(\Theta)]$$

97
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 19:42:11
$$\textbf{y} \sim \mathcal{N}(\mu, \sigma^2)$$
$$\mu = \alpha + \beta_1 \textbf{x} + \beta_2 (\textbf{x} - \theta)[(\textbf{x} - \theta) > 0]$$
$$\alpha \sim \mathcal{N}(0, 1000)$$
$$\beta_j \sim \mathcal{N}(0, 1000), \quad j=1,2$$
$$\sigma \sim \mathcal{HC}(25)$$
$$\theta \sim \mathcal{U}(-1.3, 1.1)$$
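
A hedged sketch of the log-posterior for this two-phase (change-point) regression, assuming a data list with vectors \code{x} and \code{y}, might be:

```r
library(LaplacesDemon)

# Hypothetical sketch of the change-point regression above. Data is assumed to
# be a list with numeric vectors x and y of equal length.
Model <- function(parm, Data) {
  alpha <- parm[1]; beta <- parm[2:3]
  sigma <- interval(parm[4], 1e-10, Inf)
  theta <- interval(parm[5], -1.3, 1.1)
  parm[4] <- sigma; parm[5] <- theta
  x <- Data$x; y <- Data$y
  # mu = alpha + beta1*x + beta2*(x - theta)*[(x - theta) > 0]
  mu <- alpha + beta[1] * x + beta[2] * (x - theta) * ((x - theta) > 0)
  LL <- sum(dnorm(y, mu, sigma, log=TRUE))
  LP <- LL + dnormv(alpha, 0, 1000, log=TRUE) + sum(dnormv(beta, 0, 1000, log=TRUE)) +
        dhalfcauchy(sigma, 25, log=TRUE) + dunif(theta, -1.3, 1.1, log=TRUE)
  list(LP=LP, Dev=-2*LL, Monitor=LP, yhat=rnorm(length(mu), mu, sigma), parm=parm)
}
```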

98
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 20:02:31
This form can be stated as the unnormalized joint posterior being proportional to the likelihood times the prior. However, the goal in model-based Bayesian inference is usually not to summarize the unnormalized joint posterior distribution, but to summarize the marginal distributions of the parameters. The full parameter set $\Theta$ can typically be partitioned into

$$\Theta = \{\Phi, \Lambda\}$$

where $\Phi$ is the sub-vector of interest, and $\Lambda$ is the complementary sub-vector of $\Theta$, often referred to as a vector of nuisance parameters. In a Bayesian framework, the presence of nuisance parameters does not pose any formal, theoretical problems. A nuisance parameter is a parameter that exists in the joint posterior distribution of a model, though it is not a parameter of interest. The marginal posterior distribution of $\Phi$, the parameter of interest, can simply be written as

$$p(\Phi | \textbf{y}) = \int p(\Phi, \Lambda | \textbf{y}) d\Lambda$$

In model-based Bayesian inference, Bayes' theorem is used to estimate the unnormalized joint posterior distribution, and finally the user can assess and make inferences from the marginal posterior distributions.
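
A tiny simulation makes the point concrete: draws from the joint distribution of $(\Phi, \Lambda)$ yield the marginal of $\Phi$ simply by ignoring the $\Lambda$ draws (the names and distributions below are made up for illustration):

```r
# Toy illustration: draw from the joint distribution of (Phi, Lambda) and
# recover the marginal of Phi by discarding the Lambda draws, which amounts to
# Monte Carlo integration over the nuisance parameter.
set.seed(42)
Lambda <- rnorm(1e5, 0, 1)                  # nuisance-parameter draws
Phi    <- rnorm(1e5, 2 + 0.5 * Lambda, 1)   # parameter of interest, given Lambda
c(mean=mean(Phi), sd=sd(Phi))               # summaries of the marginal of Phi
```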

99
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 20:03:29
$$\textbf{Y}_{t,j} \sim \mathcal{N}(\mu_{t,j}, \sigma^2_j), \quad t=1,\dots,T, \quad j=1,\dots,J$$
$$\mu_{t,j} = \alpha_j + \sum^P_{p=1} \Gamma_{1:J,j,p}\Phi_{1:J,j,p}\textbf{Y}_{t-p,j}$$
$$\alpha_j \sim \mathcal{N}(0, 1000)$$
$$\Gamma_{i,k,p} \sim \mathcal{BERN}(0.5), \quad i=1,\dots,J, \quad k=1,\dots,J, \quad p=1,\dots,P$$
$$(\Phi_{i,k,p} | \Gamma_{i,k,p}) \sim (1 - \Gamma_{i,k,p})\mathcal{N}(0, 0.01) + \Gamma_{i,k,p}\mathcal{N}(0, 10), \quad i=1,\dots,J, \quad k=1,\dots,J, \quad p=1,\dots,P$$
$$\sigma_j \sim \mathcal{HC}(25)$$
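
The spike-and-slab pieces of this prior can be evaluated directly; the dimensions and array layout below are assumptions chosen only for illustration:

```r
library(LaplacesDemon)

# Evaluating the SSVS prior pieces above at one draw of the indicators and
# coefficients. The dimensions J and P and the array layout are assumptions.
J <- 3; P <- 2
Gamma <- array(rbinom(J * J * P, 1, 0.5), dim=c(J, J, P))   # inclusion indicators
Phi   <- array(rnorm(J * J * P), dim=c(J, J, P))            # candidate coefficients

# Spike-and-slab mixture: N(0, 0.01) when Gamma = 0, N(0, 10) when Gamma = 1
log.prior.Phi   <- sum(dnormv(Phi, 0, ifelse(Gamma == 1, 10, 0.01), log=TRUE))
log.prior.Gamma <- sum(dbern(Gamma, 0.5, log=TRUE))
```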

100
tulipsliu (unverified trading user), employment certified, posted on 2020-12-16 20:04:55
For example, suppose one asks the question: what is the probability of going to Hell, conditional on consorting (or given that a person consorts) with Laplace's Demon\footnote{This example is, of course, intended with humor.}. By replacing $A$ with $Hell$ and $B$ with $Consort$, the question becomes

$$\Pr(\mathrm{Hell} | \mathrm{Consort}) = \frac{\Pr(\mathrm{Consort} | \mathrm{Hell}) \Pr(\mathrm{Hell})}{\Pr(\mathrm{Consort})}$$

Note that a common fallacy is to assume that $\Pr(A | B) = \Pr(B | A)$, which is called the conditional probability fallacy.
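
A quick numeric check of the formula, with made-up probabilities (these values are illustrative assumptions, not from the text), also shows how the two conditional probabilities differ:

```r
# Toy numeric check of Bayes' theorem with made-up probabilities:
p.Hell           <- 0.60   # Pr(Hell)
p.Consort.g.Hell <- 0.75   # Pr(Consort | Hell)
p.Consort        <- 0.50   # Pr(Consort)
p.Hell.g.Consort <- p.Consort.g.Hell * p.Hell / p.Consort
p.Hell.g.Consort           # 0.9, which differs from Pr(Consort | Hell) = 0.75
```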
