Thread starter: ReneeBK

Big Data Issue in HLM?


ReneeBK posted on 2014-5-26 05:25:36

I am faced with a nested random-effects logistic regression model where the sample size at the second level is extremely large (several hundred thousand units). There are three levels: level 1 has about 4 observations per level-2 unit, and level 3 has several hundred units, but the number of level-2 units is very large. The model has some additional complexities that are not worth mentioning at the moment.
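For concreteness (the original post does not name any software), here is a minimal sketch in Python of how such a three-level random-intercept logistic model might be specified with statsmodels' variational-Bayes mixed GLM. The column names (y, x, level2_id, level3_id) and the file name are illustrative assumptions, not taken from the post.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical data layout: one row per level-1 observation, with cluster
# identifiers for the level-2 and level-3 units. level2_id must be coded
# uniquely across level-3 units so the nesting is respected.
df = pd.read_csv("nested_data.csv")   # assumed columns: y, x, level2_id, level3_id

# Random intercepts at level 3 and at level 2, one fixed effect for x.
vc_formulas = {
    "level3": "0 + C(level3_id)",
    "level2": "0 + C(level2_id)",
}
model = BinomialBayesMixedGLM.from_formula("y ~ x", vc_formulas, df)

# Variational Bayes is one of the cheaper estimation routes, but with hundreds
# of thousands of level-2 units even building the random-effects design matrix
# can exhaust memory, which is exactly the problem described above.
result = model.fit_vb()
print(result.summary())
```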

Employing virtually any estimation method (quadrature, Bayesian) results either in failure to converge because of the memory limits of my machine, or in days to weeks of computation to reach convergence.
One often reads about the minimum sample size necessary to obtain nearly unbiased estimates, but there is surely also a point at which a sample size becomes excessive.

I know that if I randomly drop, say, 50% of the level-2 units, I can achieve convergence within a reasonable amount of time. Moreover, the estimates, standard errors, and p-values are virtually the same. I would like references to support what I am finding.
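To make the subsampling idea concrete, here is a minimal hypothetical sketch of drawing a 50% random sample of level-2 clusters while keeping every level-1 observation inside each selected cluster; refitting the model on each subsample and comparing the fixed-effect estimates, standard errors, and p-values is the informal check described above. The identifiers reuse the assumed column names from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(12345)

def subsample_level2(data, id_col="level2_id", frac=0.5, rng=rng):
    """Keep a random `frac` of the level-2 clusters, retaining every
    level-1 observation inside each selected cluster."""
    ids = data[id_col].unique()
    keep = rng.choice(ids, size=int(len(ids) * frac), replace=False)
    return data[data[id_col].isin(keep)]

# `df` as loaded in the earlier sketch; two independent 50% subsamples
# allow an informal stability check of the estimates across subsamples.
half_a = subsample_level2(df)
half_b = subsample_level2(df)
```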

Are there any articles/textbooks which speak to this issue and provide guidelines/recommendations?

  • Subramanian S V, Duncan C, Jones K (2001) "Multilevel perspectives on modeling census data", Environment and Planning A 33(3): 399-417.
  • Snijders & Bosker (2012) Multilevel Analysis, 2nd edition, Sage; see the section "Aggregation".


