Original poster: tulipsliu

[Frontier Topics] [QuantEcon] Mixed-language programming: MATLAB with FORTRAN

221
tulipsliu, posted on 2020-12-27 18:04:03
For $q\rightarrow 1$, Rényi entropy converges to Shannon entropy.
For $0<q<1$, events that have a low probability of occurring receive more weight, while for $q>1$ the weights induce a preference for outcomes $j$ with a higher initial probability.
Consequently, Rényi entropy provides a more flexible tool for estimating uncertainty, since different areas of a distribution can be emphasized, depending on the parameter $q$.

Using the escort distribution [for more information, see @BeckS93] $\phi_q(j)=\frac{p^q(j)}{\sum_j p^q(j)}$ with $q >0$ to normalize the weighted distributions, @JKS12 derive the Rényi transfer entropy measure as
$$
  RT_{J \rightarrow I}(k,l) = \frac{1}{1-q} \log \left(\frac{\sum_i \phi_q\left(i_t^{(k)}\right)p^q\left(i_{t+1}|i^{(k)}_t\right)}{\sum_{i,j} \phi_q\left(i^{(k)}_t,j^{(l)}_t\right)p^q\left(i_{t+1}|i^{(k)}_t,j^{(l)}_t \right)}\right).
$$
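This estimator is straightforward to implement from discretized data. Below is a minimal numpy sketch of a plug-in Rényi transfer entropy estimator for $k=l=1$; the quantile-based symbol encoding, the function name, and the default of three bins are illustrative assumptions, not taken from any package.

```python
import numpy as np

def renyi_te(x, y, q=0.5, bins=3):
    """Plug-in Rényi transfer entropy from Y to X with k = l = 1 (natural log)."""
    # Symbolize each series into `bins` states via empirical quantiles.
    def symbolize(z):
        edges = np.quantile(z, np.linspace(0, 1, bins + 1))[1:-1]
        return np.digitize(z, edges)

    xs, ys = symbolize(x), symbolize(y)
    x1, x0, y0 = xs[1:], xs[:-1], ys[:-1]        # i_{t+1}, i_t, j_t

    # Empirical joint distributions p(i_{t+1}, i_t) and p(i_{t+1}, i_t, j_t).
    p_ii = np.zeros((bins, bins))
    p_iij = np.zeros((bins, bins, bins))
    np.add.at(p_ii, (x1, x0), 1.0)
    np.add.at(p_iij, (x1, x0, y0), 1.0)
    p_ii /= p_ii.sum()
    p_iij /= p_iij.sum()

    p_i = p_ii.sum(axis=0)                       # p(i_t)
    p_ij = p_iij.sum(axis=0)                     # p(i_t, j_t)

    # Escort distributions phi_q over the conditioning states.
    phi_i = p_i**q / np.sum(p_i**q)
    phi_ij = p_ij**q / np.sum(p_ij**q)

    # Conditional probabilities, with empty cells (0/0) mapped to 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        c_i = np.nan_to_num(p_ii / p_i)          # p(i_{t+1} | i_t)
        c_ij = np.nan_to_num(p_iij / p_ij)       # p(i_{t+1} | i_t, j_t)

    num = np.sum(phi_i * c_i**q)                 # sum over i_{t+1} and i_t
    den = np.sum(phi_ij * c_ij**q)               # sum over i_{t+1}, i_t, j_t
    return np.log(num / den) / (1.0 - q)
```

Consistent with the limit noted above, setting $q$ close to 1 makes this estimate approach the Shannon transfer entropy.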

222
tulipsliu, posted on 2020-12-27 18:04:24
Analogously to (Shannon) transfer entropy, Rényi transfer entropy measures the information flow from $J$ to $I$.
Note that, contrary to Shannon transfer entropy, the calculation of Rényi transfer entropy can result in negative values.
In such a situation, knowing the history of $J$ reveals even greater risk than would be indicated by knowing the history of $I$ alone. For more details on this issue see @JKS12.
  
The above transfer entropy estimates are commonly biased due to small sample effects.
A remedy is provided by the effective transfer entropy [@MK02], which is computed in the following way:
  
$$
  ET_{J \rightarrow I}(k,l)=  T_{J \rightarrow I}(k,l)- T_{J_{\text{shuffled}} \rightarrow I}(k,l),
$$

223
tulipsliu, posted on 2020-12-27 18:04:42
where $T_{J_{\text{shuffled}} \rightarrow I}(k,l)$ denotes the transfer entropy computed from a shuffled version of the time series of $J$.
Shuffling implies randomly drawing values from the time series of $J$ and realigning them to generate a new time series.
This procedure destroys the time series dependencies of $J$ as well as the statistical dependencies between $J$ and $I$.
As a result, $T_{J_{\text{shuffled}} \rightarrow I}(k,l)$ converges to zero with increasing sample size, and any nonzero value of $T_{J_{\text{shuffled}} \rightarrow I}(k,l)$ is due to small sample effects.
The transfer entropy estimates from shuffled data can therefore be used as an estimator for the bias induced by these small sample effects.
To derive a consistent estimator, shuffling is repeated many times. The average of the shuffled transfer entropy estimates across all replications is then subtracted from the Shannon or Rényi transfer entropy estimate, yielding a bias-corrected effective transfer entropy estimate.
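In code, the correction reduces to a few lines. A minimal sketch, assuming some transfer entropy estimator `te_fun(x, y)` such as the `renyi_te` sketch above (all names are illustrative):

```python
import numpy as np

def effective_te(x, y, te_fun, n_shuffles=100, seed=0):
    """Effective (bias-corrected) transfer entropy from Y to X."""
    rng = np.random.default_rng(seed)
    # Shuffling Y destroys both its serial dependence and its link to X,
    # so the shuffled estimates measure pure small-sample bias.
    bias = np.mean([te_fun(x, rng.permutation(y)) for _ in range(n_shuffles)])
    return te_fun(x, y) - bias
```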

224
tulipsliu, posted on 2020-12-27 18:05:06
To assess the statistical significance of transfer entropy estimates, we rely on a Markov block bootstrap as proposed by @Dimpfl2013.
In contrast to shuffling, the Markov block bootstrap preserves the dependencies within each time series.
Thereby, it generates the distribution of transfer entropy estimates under the null hypothesis of no information transfer: randomly drawn blocks of process $J$ are realigned to form a simulated series, which retains the univariate dependencies of $J$ but eliminates the statistical dependencies between $J$ and $I$.
Shannon or Rényi transfer entropy is then estimated based on the simulated time series.
Repeating this procedure yields the distribution of the transfer entropy estimate under the null of no information flow.
The p-value associated with the null hypothesis of no information transfer is given by $1-\hat{q}_{TE}$, where $\hat{q}_{TE}$ denotes the quantile of the simulated distribution that corresponds to the original transfer entropy estimate.
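The test is also simple to sketch. The snippet below uses a plain moving-block resampler as a simplified stand-in for the Markov block bootstrap of @Dimpfl2013: resampled blocks of $J$ keep its own dependence structure while breaking the alignment with $I$. Block length and replication count are illustrative choices.

```python
import numpy as np

def bootstrap_pvalue(x, y, te_fun, block_len=20, n_boot=300, seed=0):
    """P-value for H0: no information flow from Y to X."""
    rng = np.random.default_rng(seed)
    te_obs = te_fun(x, y)
    n = len(y)
    te_null = np.empty(n_boot)
    for b in range(n_boot):
        # Glue randomly chosen blocks of Y into a resampled series of length n.
        starts = rng.choice(n - block_len + 1, size=n // block_len + 1)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        te_null[b] = te_fun(x, y[idx])
    # 1 - q_TE: share of null estimates at least as large as the observed one.
    return np.mean(te_null >= te_obs)
```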

225
tulipsliu, posted on 2020-12-27 18:05:27
Before we turn to different applications below, we provide a simple example to demonstrate what the output of the different functions looks like.
Let us consider a linear relationship between two random variables $X$ and $Y$, where $Y$ depends on $X$ with one lag and $X$ is independent of $Y$:


$$
\begin{split}
x_t &= 0.2x_{t-1} + \varepsilon_{x,t} \\
y_t &= x_{t-1} + \varepsilon_{y,t},
\end{split}
$$
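This process is easy to simulate; a short numpy sketch, where the seed and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)        # seed and sample size are arbitrary
n = 2000
eps_x, eps_y = rng.standard_normal(n), rng.standard_normal(n)
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.2 * x[t - 1] + eps_x[t]
    y[t] = x[t - 1] + eps_y[t]         # Y depends on X with one lag
```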

226
tulipsliu, posted on 2020-12-27 18:05:57
Consider again the above example, where the results show a significant information flow from $X$ to $Y$, but not vice versa.
Similar conclusions could be drawn from using a vector autoregressive model and testing for Granger causality.
However, the main advantage of using transfer entropy is that it is not limited to linear relationships.
Consider the following nonlinear relation between $X$ and $Y$, where, again, only $Y$ depends on $X$:
$$
\begin{split}
x_t &= 0.2x_{t-1} + \varepsilon_{x,t}\\
y_t &= \sqrt{\lvert x_{t-1}\rvert} + \varepsilon_{y,t},
\end{split}
$$
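Only the recursion for $y_t$ changes relative to the linear example; a self-contained sketch (constants again arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
eps_x, eps_y = rng.standard_normal(n), rng.standard_normal(n)
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.2 * x[t - 1] + eps_x[t]
    y[t] = np.sqrt(np.abs(x[t - 1])) + eps_y[t]   # nonlinear dependence on X
```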

227
tulipsliu, posted on 2020-12-27 18:06:24
To illustrate the use of Rényi transfer entropy, we simulate data for which the dependence of $Y$ on $X$ changes with the level of the innovation.

$$
\begin{split}
x_t &= 0.2x_{t-1} + \varepsilon_{x,t}\\
y_t &= \begin{cases} \phantom{0.2}x_{t-1} + \varepsilon_{y,t} & \text{if } |\varepsilon_{y,t}| > s \\ 0.2x_{t-1} + \varepsilon_{y,t} & \text{if } |\varepsilon_{y,t}| \leq s \end{cases},
\end{split}
$$
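A sketch of this regime-dependent process, where the threshold $s$ and all other constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, s = 2000, 1.0                       # threshold s is an illustrative choice
eps_x, eps_y = rng.standard_normal(n), rng.standard_normal(n)
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.2 * x[t - 1] + eps_x[t]
    coef = 1.0 if abs(eps_y[t]) > s else 0.2   # stronger dependence for large innovations
    y[t] = coef * x[t - 1] + eps_y[t]
```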

228
tulipsliu, posted on 2020-12-28 14:11:09
Even though we have a vanilla formulation, we can still consider a direct nonconvex formulation. Consider the risk expression:
$$R(\mathbf{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}-b_i\right)^{2}.$$
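This expression is straightforward to evaluate numerically; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def risk_concentration(w, Sigma, b):
    """R(w): squared deviations of relative risk contributions from budgets b."""
    Sw = Sigma @ w
    rel_rc = w * Sw / (w @ Sw)        # w_i (Sigma w)_i / (w' Sigma w)
    return np.sum((rel_rc - b) ** 2)
```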

229
tulipsliu, posted on 2020-12-28 14:11:39
Consider now the risk expression, written in terms of $r_i = w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i$ (so that $\mathbf{1}^T\mathbf{r} = \mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}$):
$$R(\mathbf{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{\sqrt{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}}-b_i\sqrt{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}\right)^{2} = \sum_{i=1}^{N}\left(\frac{r_i}{\sqrt{\mathbf{1}^T\mathbf{r}}}-b_i\sqrt{\mathbf{1}^T\mathbf{r}}\right)^{2}.$$
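A sketch in the same style, coded through the vector $\mathbf{r}$:

```python
import numpy as np

def risk_concentration_r(w, Sigma, b):
    """Risk expression written through r_i = w_i (Sigma w)_i."""
    r = w * (Sigma @ w)
    s = np.sum(r)                     # 1'r = w' Sigma w
    return np.sum((r / np.sqrt(s) - b * np.sqrt(s)) ** 2)
```

Note that each summand here equals $\sqrt{\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}}$ times the corresponding summand of the previous expression, so the two risk expressions differ only by the positive factor $\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}$ and vanish for exactly the same portfolios.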

230
tulipsliu, posted on 2020-12-28 14:13:28
From Euler's theorem, the volatility of the portfolio $\sigma\left(\mathbf{w}\right)=\sqrt{\mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}}$ can be decomposed as
$$\sigma\left(\mathbf{w}\right)=\sum_{i=1}^{N}w_i\frac{\partial\sigma}{\partial w_i}
= \sum_{i=1}^N\frac{w_i\left(\boldsymbol{\Sigma}\mathbf{w}\right)_{i}}{\sqrt{\mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}}}.$$

The **risk contribution (RC)** from the $i$th asset to the total risk $\sigma(\mathbf{w})$ is defined as
$${\sf RC}_i =\frac{w_i\left(\boldsymbol{\Sigma}\mathbf{w}\right)_i}{\sqrt{\mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}}}$$
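Both the decomposition and the definition are easy to verify numerically; a small sketch with a toy covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T                        # toy positive-definite covariance
w = np.full(5, 1 / 5)                  # equally weighted portfolio

sigma = np.sqrt(w @ Sigma @ w)
rc = w * (Sigma @ w) / sigma           # RC_i for each asset
print(np.allclose(rc.sum(), sigma))    # Euler decomposition: prints True
```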
