Thread starter: tulipsliu

[Frontier Topics] R Code Implementations for Frontier Research in Finance

#1 (OP)
tulipsliu (employment verified), posted 2020-12-7 08:41:44

================================
/*  Author :   Daniel Tulips Liu    */
================================





Last year I got to know a postdoc friend; over the past decade roughly a dozen of my friends have been postdocs, many of them economics PhDs who went abroad as visiting scholars. Between them, my friends have been to universities in nearly every major developed economy.



First, the images:

After thinking it over, I deleted the images and edited part of the post; they were not really appropriate for the forum. Thanks to the moderator of the R board for rating my previous post.

From now on I will post as R Markdown PDFs with syntax-highlighted code (I tested last night that this works). The difficulty level remains master's level; I will not post postdoc-level material, and read permission is set so that any registered member can view it. My apologies.

If this post is inappropriate, the moderators are free to withhold approval and not publish it.



-------------------------------[ Enjoy ]--------------------------------------------------------------------------






Keywords: R code, finance, Daniel, Author, economics PhD

[Figure: 网络图.png (102.35 KB), network diagram]

[Figure: 网络图二.png (83.89 KB), caption: a figure I have not yet identified]


#2
tulipsliu (employment verified), posted 2020-12-7 11:05:36
The DINA model defines a latent mastery indicator $\xi_{ij}$ for respondent $j$ on item $i$, built from the attribute profile entries $\alpha_{jk}$ and the Q-matrix entries $q_{ik}$:
$$
\xi_{ij}=\prod_{k=1}^{K}\alpha_{jk}^{q_{ik}}
$$

#3
tulipsliu (employment verified), posted 2020-12-7 11:05:54
If respondent $j$ has mastered all attributes required for item $i$, $\xi_{ij}=1$; if the respondent has not mastered all of the attributes, $\xi_{ij}=0$.

The model allows for slipping and guessing, defined in terms of conditional probabilities of answering items correctly ($Y_{ij}=1$) and incorrectly ($Y_{ij}=0$):
$$
s_i=\mathrm{Pr}(Y_{ij}=0\, | \, \xi_{ij}=1)
$$

#4
tulipsliu (employment verified), posted 2020-12-7 11:06:25
$$
g_i=\mathrm{Pr}(Y_{ij}=1 \, | \, \xi_{ij}=0).
$$
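
A minimal R sketch (all values are illustrative assumptions, not from the post) of how a single response would be generated from these two conditional probabilities:

```r
# Sketch: simulate one response from the slip/guess probabilities.
set.seed(1)
s  <- 0.10  # slip:  Pr(Y = 0 | xi = 1)   (assumed value)
g  <- 0.20  # guess: Pr(Y = 1 | xi = 0)   (assumed value)
xi <- 1     # respondent has mastered all attributes the item requires

p_correct <- if (xi == 1) 1 - s else g  # Pr(Y = 1) under the DINA model
y <- rbinom(1, 1, p_correct)
y
```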

#5
tulipsliu (employment verified), posted 2020-12-7 11:07:05
The slip parameter $s_i$ is the probability that respondent $j$ responds incorrectly to item $i$ although he or she has mastered all required attributes. The guess parameter $g_i$ is the probability that respondent $j$ responds correctly to item $i$ although he or she has not mastered all the required attributes.

It follows that the probability $\pi_{ij}$ of a correct response of respondent $j$ to item $i$ is
$$
\pi_{ij}=\mathrm{Pr}(Y_{ij}=1 \, | \, \boldsymbol{\alpha_j}, s_i, g_i)=(1-s_{i})^{\xi_{ij}}g_{i}^{1-\xi_{ij}}.
$$
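
These two formulas translate directly into R. The sketch below uses a small hypothetical Q-matrix, attribute profiles, and item parameters (all assumptions for illustration) to compute $\xi_{ij}$ and $\pi_{ij}$ for every respondent-item pair:

```r
# Sketch: mastery indicators xi[i, j] and correct-response probabilities
# pi[i, j] for a hypothetical Q-matrix and attribute profiles.
Q     <- rbind(c(1, 0), c(0, 1), c(1, 1))  # I = 3 items, K = 2 attributes
alpha <- rbind(c(1, 1), c(1, 0), c(0, 0))  # J = 3 respondents
s     <- c(0.10, 0.15, 0.05)               # slip parameter per item
g     <- c(0.20, 0.25, 0.10)               # guess parameter per item

I <- nrow(Q); J <- nrow(alpha)

# xi[i, j] = prod_k alpha[j, k]^Q[i, k]  (0^0 = 1 in R, matching the formula)
xi <- matrix(0, I, J)
for (i in seq_len(I)) {
  for (j in seq_len(J)) {
    xi[i, j] <- prod(alpha[j, ]^Q[i, ])
  }
}

# pi[i, j] = (1 - s_i)^xi[i, j] * g_i^(1 - xi[i, j]);
# s and g (length I) recycle down the rows of the I x J matrix xi
pi_ij <- (1 - s)^xi * g^(1 - xi)
round(pi_ij, 3)
```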

#6
tulipsliu (employment verified), posted 2020-12-7 11:07:32
In [Section 1.1](#overview), respondents' knowledge was defined in terms of $\alpha_{jk}$ and $\xi_{ij}$, which are discrete latent variables. However, **Stan** does not support sampling discrete parameters. Instead, models that involve bounded discrete parameters can be coded by marginalizing out the discrete parameters (see Chapter 14 of the Stan Reference Manual, version 2.15.0, for more on latent discrete parameters).

The purpose of the DINA model is to estimate an attribute profile of each respondent. In the framework of latent class models, respondents are viewed as belonging to latent classes that determine the attribute profiles. In this sense, $\alpha_{jk}$ and $\xi_{ij}$ can alternatively be expressed at the level of the latent class subscripted by $c$. Each possible attribute profile corresponds to a latent class and the corresponding attribute profiles are labeled $\boldsymbol{\alpha_c}$ with elements $\alpha_{ck}$. The global attribute mastery indicator for respondents in latent class $c$ is defined by
$$
\xi_{ic}=\prod_{k=1}^{K}\alpha_{ck}^{q_{ik}}
$$

#7
tulipsliu (employment verified), posted 2020-12-7 11:09:07
where $\alpha_{ck}$ represents the attribute variable for respondents in latent class $c$ that indicates whether respondents in this class have mastered attribute $k$ $(\alpha_{ck}=1)$ or not $(\alpha_{ck}=0)$, and $q_{ik}$ represents the binary entry in the Q-matrix for item $i$ and attribute $k$. Although $\xi_{ij}$ for respondent $j$ is latent, $\xi_{ic}$ is determined and known for each possible attribute profile as a type of characteristic of each latent class.

Then, the probability of a correct response to item $i$ for a respondent in latent class $c$ is represented as follows:
$$
\pi_{ic}=\mathrm{Pr}(Y_{ic}=1 \, | \, \boldsymbol{\alpha_c}, s_i, g_i)=(1-s_{i})^{\xi_{ic}}g_{i}^{1-\xi_{ic}}
$$
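
Because $\xi_{ic}$ is known for every class, the class-level quantities can be tabulated up front. A sketch (same hypothetical Q, s, g as in the earlier sketch) that enumerates all $C = 2^K$ attribute profiles:

```r
# Sketch: enumerate all C = 2^K latent classes and tabulate xi[i, c]
# and pi[i, c]. Q, s, g are the same illustrative values as before.
Q <- rbind(c(1, 0), c(0, 1), c(1, 1))
s <- c(0.10, 0.15, 0.05)
g <- c(0.20, 0.25, 0.10)
K <- ncol(Q); I <- nrow(Q); C <- 2^K

# One row per possible attribute profile: (0,0), (1,0), (0,1), (1,1)
alpha_c <- as.matrix(expand.grid(rep(list(0:1), K)))

# xi[i, c] = prod_k alpha_c[c, k]^Q[i, k]: fixed and known per class
xi_ic <- matrix(0, I, C)
for (i in seq_len(I)) {
  for (c in seq_len(C)) {
    xi_ic[i, c] <- prod(alpha_c[c, ]^Q[i, ])
  }
}

pi_ic <- (1 - s)^xi_ic * g^(1 - xi_ic)  # Pr(Y_ic = 1) per item and class
round(pi_ic, 3)
```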

#8
tulipsliu (employment verified), posted 2020-12-7 11:10:07
where $Y_{ic}$ is the observed response to item $i$ of a respondent in latent class $c$.

The marginal probability of a respondent's observed responses across all items becomes a finite mixture model as follows:
$$
\begin{aligned}
\mathrm{Pr}(\boldsymbol{Y}_j=\boldsymbol{y}_j) &= \sum_{c=1}^{C}\nu_c\prod_{i=1}^I\mathrm{Pr}(Y_{ij}=y_{ij} \, | \, \boldsymbol{\alpha_c}, s_i, g_i) \\
&=\sum_{c=1}^{C}\nu_c\prod_{i=1}^I\pi_{ic}^{y_{ij}}(1-\pi_{ic})^{1-y_{ij}} \\
&= \sum_{c=1}^{C}\nu_c\prod_{i=1}^I\left[(1-s_{i})^{\xi_{ic}}g_{i}^{1-\xi_{ic}}\right]^{y_{ij}}\left[1-(1-s_{i})^{\xi_{ic}}g_{i}^{1-\xi_{ic}}\right]^{1-y_{ij}}
\end{aligned}
$$
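
This marginal likelihood is the quantity a Stan program accumulates in place of sampling the discrete class memberships. A log-scale R sketch of the same computation, stabilized with a log-sum-exp step (it assumes the `pi_ic` matrix and class weights `nu`, e.g. the illustrative values from the previous sketch):

```r
# Sketch: marginal log-likelihood of one response vector y under the
# finite-mixture form of the DINA model.
loglik_dina <- function(y, pi_ic, nu) {
  # log Pr(y | class c) = sum_i Bernoulli log-density with prob pi_ic[, c]
  log_pc <- vapply(seq_along(nu), function(c) {
    sum(dbinom(y, size = 1, prob = pi_ic[, c], log = TRUE))
  }, numeric(1))
  # log sum_c nu_c * Pr(y | c), shifted by the max for numerical stability
  lw <- log(nu) + log_pc
  m  <- max(lw)
  m + log(sum(exp(lw - m)))
}

# Toy usage: equal class weights over the C = 4 classes tabulated above
# loglik_dina(y = c(1, 0, 1), pi_ic = pi_ic, nu = rep(1/4, 4))
```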

#9
tulipsliu (employment verified), posted 2020-12-7 11:11:50
As we have seen, estimation of finite mixture models in Stan does not involve drawing realizations of the respondents' class membership (i.e., attribute profiles) from the posterior distribution. Therefore, additional Stan code is necessary for obtaining the posterior probabilities of the respondents' class membership.

We will begin by conditioning on the parameters $\nu_c$ ($c=1,\ldots,C$), $s_i$, and $g_i$ ($i=1,\ldots,I$). The parameter $\nu_c$ represents the 'prior' probability that respondent $j$ belongs to class $c$, not conditioning on the respondent's response vector $\boldsymbol{y}_j$. Since classes are defined by the attribute profiles $\boldsymbol{\alpha_c}$, we can write this probability as
$$
\mathrm{Pr}(\boldsymbol{\alpha_j}=\boldsymbol{\alpha_c})=\nu_c.
$$

The corresponding posterior probability of respondent $j$'s class membership, given the response vector $\boldsymbol{y}_j$ , becomes
$$
\mathrm{Pr}(\boldsymbol{\alpha_j}=\boldsymbol{\alpha_c} \, | \, \boldsymbol{y}_j)=\frac{\mathrm{Pr}(\boldsymbol{\alpha_j}=\boldsymbol{\alpha_c})\mathrm{Pr}(\boldsymbol{Y}_j=\boldsymbol{y}_j \, | \, \boldsymbol{\alpha_c})}{\mathrm{Pr}(\boldsymbol{Y}_j=\boldsymbol{y}_j)}=\frac{\nu_c\prod_{i=1}^I\pi_{ic}^{y_{ij}}(1-\pi_{ic})^{1-y_{ij}}}{\sum_{c'=1}^{C}\nu_{c'}\prod_{i=1}^I\pi_{ic'}^{y_{ij}}(1-\pi_{ic'})^{1-y_{ij}}}.
$$
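
Normalizing the same class-wise terms gives these posterior membership probabilities directly; a minimal R sketch with the same assumed inputs as above:

```r
# Sketch: Pr(alpha_j = alpha_c | y_j) for every latent class c, i.e. the
# numerator nu_c * Pr(y | c) normalized over all classes.
posterior_class <- function(y, pi_ic, nu) {
  log_pc <- vapply(seq_along(nu), function(c) {
    sum(dbinom(y, size = 1, prob = pi_ic[, c], log = TRUE))
  }, numeric(1))
  w <- log(nu) + log_pc
  w <- exp(w - max(w))  # stabilize on the log scale before normalizing
  w / sum(w)            # posterior class probabilities, sum to 1
}

# post_c <- posterior_class(y = c(1, 0, 1), pi_ic = pi_ic, nu = rep(1/4, 4))
```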

#10
tulipsliu (employment verified), posted 2020-12-7 11:15:32
From these joint posterior probabilities of the attribute vectors, we can also derive the posterior probabilities of mastery of the individual attributes as
$$
\mathrm{Pr}(\alpha_{jk}=1 \, | \, \boldsymbol{y}_j)=\sum_{c=1}^{C}\mathrm{Pr}(\boldsymbol{\alpha_j}=\boldsymbol{\alpha_c} \, | \, \boldsymbol{y}_j)\times\alpha_{ck}.
$$
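
In R this reduces to one matrix product over the class posteriors (a sketch reusing the hypothetical `alpha_c`, `pi_ic`, and `posterior_class` from the earlier sketches):

```r
# Sketch: Pr(alpha_jk = 1 | y_j) = sum_c Pr(class c | y_j) * alpha_c[c, k]
post_c       <- posterior_class(y = c(1, 0, 1), pi_ic = pi_ic, nu = rep(1/4, 4))
post_mastery <- as.numeric(t(alpha_c) %*% post_c)  # one probability per attribute
names(post_mastery) <- paste0("attribute_", seq_along(post_mastery))
post_mastery
```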
