Thread starter: tulipsliu

[Frontier Topics] [QuantEcon] Mixing MATLAB with FORTRAN

211
tulipsliu (employment verified) | posted 2020-12-26 14:39:24
$$
\begin{align}
h(x) &= \int \left( \frac{f(x) + g(x)}{1+ f^{2}(x)}
+ \frac{1+ f(x)g(x)}{\sqrt{1 - \sin x}}
\right) \, dx\label{E:longInt}\\
&= \int \frac{1 + f(x)}{1 + g(x) } \, dx
- 2 \tan^{-1}(x-2)\notag
\end{align}
$$


212
tulipsliu (employment verified) | posted 2020-12-26 14:39:55
$$
f(x)=
\begin{cases}
-x^{2}, &\text{if $x < 0$;}\\
\alpha + x, &\text{if $0 \leq x \leq 1$;}\\
x^{2}, &\text{otherwise.}
\end{cases}
$$


213
tulipsliu (employment verified) | posted 2020-12-26 15:38:08
$$\left| \frac{a + b}{2} \right|, \quad \left\| A^{2} \right\|,
\quad \left( \frac{a}{2}, b \right],
\quad \left. F(x) \right|_{a}^{b}$$


214
tulipsliu (employment verified) | posted 2020-12-26 15:38:45
$$
\begin{alignat*}{2}
(A + B C)x &+ &C &y = 0,\\
Ex &+ &(F + G)&y = 23.
\end{alignat*}
$$


215
tulipsliu (employment verified) | posted 2020-12-26 15:39:08
$$
f(x) \overset{ \text{def} }{=} x^{2} - 1
$$


216
tulipsliu (employment verified) | posted 2020-12-26 15:40:42
$$
\begin{alignat}{4}
a_{11}x_1 &+ a_{12}x_2 &&+ a_{13}x_3 &&
&&= y_1,\\
a_{21}x_1 &+ a_{22}x_2 && &&+ a_{24}x_4
&&= y_2,\\
a_{31}x_1 & &&+ a_{33}x_3 &&+ a_{34}x_4
&&= y_3.
\end{alignat}
$$


217
tulipsliu (employment verified) | posted 2020-12-26 15:41:30
$$
\left(
\begin{matrix}
a + b + c & uv & x - y & 27\\
a+b &u+v&z & 1340
\end{matrix}
\right) =
\left(
\begin{matrix}
1 & 100 & 115 & 27\\
201 & 0 & 1 & 1340
\end{matrix}
\right)
$$


218
tulipsliu (employment verified) | posted 2020-12-27 18:03:09
Strictly speaking, Shannon's formula is a measure of uncertainty, which increases with the number of bits needed to optimally encode a sequence of realizations of $J$.
To measure the information flow between two processes, Shannon entropy is combined with the concept of the Kullback-Leibler distance [@KL51], under the assumption that the underlying processes evolve over time according to a Markov process [@schreiber2000].
Let $I$ and $J$ denote two discrete random variables with marginal probability distributions $p(i)$ and $p(j)$ and joint probability distribution $p(i,j)$, whose dynamical structures correspond to stationary Markov processes of order $k$ (process $I$) and $l$ (process $J$).
The Markov property implies that the probability of observing $I$ at time $t+1$ in state $i$ conditional on the $k$ previous observations is $p(i_{t+1}|i_t,...,i_{t-k+1})=p(i_{t+1}|i_t,...,i_{t-k})$.
The average number of bits needed to encode the observation in $t+1$ if the previous $k$ values are known is given by
  
$$
  h_I(k)=- \sum_i p\left(i_{t+1}, i_t^{(k)}\right) \cdot \log \left(p\left(i_{t+1}|i_t^{(k)}\right)\right),
$$
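
As a concrete illustration, here is a minimal plug-in estimator of $h_I(k)$ for a discrete series: it counts the empirical frequencies of length-$k$ blocks and the symbol that follows them, and uses base-2 logarithms so the result is in bits. The function name `conditional_entropy` and the toy Markov series are illustrative assumptions of mine, not taken from the cited papers.

```python
import numpy as np
from collections import Counter

def conditional_entropy(series, k=1):
    """Plug-in estimate of h_I(k): average number of bits needed to encode
    the next symbol given the previous k symbols (base-2 logarithm)."""
    series = list(series)
    joint = Counter()   # counts of (k-block, next symbol)
    blocks = Counter()  # counts of k-blocks
    for t in range(k, len(series)):
        block = tuple(series[t - k:t])
        joint[(block, series[t])] += 1
        blocks[block] += 1
    n = sum(joint.values())
    h = 0.0
    for (block, nxt), c in joint.items():
        p_joint = c / n               # p(i_{t+1}, i_t^{(k)})
        p_cond = c / blocks[block]    # p(i_{t+1} | i_t^{(k)})
        h -= p_joint * np.log2(p_cond)
    return h

# Toy check: a binary chain that repeats its last symbol with probability 0.9;
# the estimate should be close to -(0.9*log2(0.9) + 0.1*log2(0.1)) ≈ 0.469 bits.
rng = np.random.default_rng(0)
x = [0]
for _ in range(10_000):
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])
print(conditional_entropy(x, k=1))
```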


219
tulipsliu (employment verified) | posted 2020-12-27 18:03:26
where $i^{(k)}_t=(i_t,...,i_{t-k+1})$. $h_J(l)$ can be derived analogously for process $J$.
In the bivariate case, information flow from process $J$ to process $I$ is measured by quantifying the deviation from the generalized Markov property $p(i_{t+1}| i_t^{(k)})=p(i_{t+1}| i_t^{(k)},j_t^{(l)})$ relying on the Kullback-Leibler distance [@schreiber2000].
Thus, (Shannon) transfer entropy is given by
  
$$
  T_{J \rightarrow I}(k,l) = \sum_{i,j} p\left(i_{t+1}, i_t^{(k)}, j_t^{(l)}\right) \cdot \log \left(\frac{p\left(i_{t+1}| i_t^{(k)}, j_t^{(l)}\right)}{p\left(i_{t+1}|i_t^{(k)}\right)}\right),
$$
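
In the same spirit, a minimal sketch of a plug-in estimator of $T_{J \rightarrow I}(k,l)$ for two discrete (or already discretized) series, again with base-2 logarithms; the name `transfer_entropy` and the toy example are assumptions for illustration, not the estimator implementation of [@schreiber2000].

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, k=1, l=1):
    """Plug-in estimate of Shannon transfer entropy T_{Y->X}(k, l) in bits:
    the extra predictive information that the past l values of y carry about
    x[t+1] beyond what the past k values of x already explain."""
    n = min(len(x), len(y))
    c_full = Counter()   # (x_t^{(k)}, y_t^{(l)}, x_{t+1})
    c_xy   = Counter()   # (x_t^{(k)}, y_t^{(l)})
    c_xnxt = Counter()   # (x_t^{(k)}, x_{t+1})
    c_x    = Counter()   # (x_t^{(k)},)
    for t in range(max(k, l) - 1, n - 1):
        xk = tuple(x[t - k + 1:t + 1])
        yl = tuple(y[t - l + 1:t + 1])
        nxt = x[t + 1]
        c_full[(xk, yl, nxt)] += 1
        c_xy[(xk, yl)] += 1
        c_xnxt[(xk, nxt)] += 1
        c_x[xk] += 1
    total = sum(c_full.values())
    te = 0.0
    for (xk, yl, nxt), c in c_full.items():
        p_joint = c / total                    # p(x_{t+1}, x_t^{(k)}, y_t^{(l)})
        p_num = c / c_xy[(xk, yl)]             # p(x_{t+1} | x_t^{(k)}, y_t^{(l)})
        p_den = c_xnxt[(xk, nxt)] / c_x[xk]    # p(x_{t+1} | x_t^{(k)})
        te += p_joint * np.log2(p_num / p_den)
    return te

# Toy check: x copies y with one lag plus 10% noise, so the flow y -> x
# should be clearly positive while the flow x -> y stays near zero.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 10_000)
x = np.roll(y, 1) ^ (rng.random(10_000) < 0.1)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```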


220
tulipsliu (employment verified) | posted 2020-12-27 18:03:44
where $T_{J\rightarrow I}$ consequently measures the information flow from $J$ to $I$ ($T_{I \rightarrow J}$, a measure of the information flow from $I$ to $J$, can be derived analogously).

Transfer entropy can also be based on Rényi entropy [@R70] rather than Shannon entropy.
Rényi entropy introduces a weighting parameter $q>0$ for the individual probabilities $p(j)$ and can be calculated as
$$
  H^q_J = \frac{1}{1-q} \log \left(\sum_j p^q(j)\right).
$$
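
A short sketch of the corresponding plug-in estimate of the Rényi entropy for a discrete sample, assuming base-2 logarithms and $q \neq 1$; as $q \rightarrow 1$ the value approaches the Shannon entropy. The function name and the toy sample are my own illustrations.

```python
import numpy as np
from collections import Counter

def renyi_entropy(series, q):
    """Plug-in Rényi entropy H^q in bits for q > 0, q != 1; small q gives
    more weight to rare outcomes, and the value tends to the Shannon
    entropy as q -> 1."""
    counts = np.array(list(Counter(series).values()), dtype=float)
    p = counts / counts.sum()
    return np.log2(np.sum(p ** q)) / (1.0 - q)

# Toy check on a skewed three-symbol sample: H^0.5 > H^0.99 > H^2.
rng = np.random.default_rng(2)
s = rng.choice([0, 1, 2], size=10_000, p=[0.7, 0.2, 0.1])
for q in (0.5, 0.99, 2.0):
    print(q, renyi_entropy(s, q))
```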

